Test Report: Docker_Linux_crio 21139

acfd8b7155af18aff79ff1a575a474dfb6fd930f:2025-10-09:41835

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 517.58
38 TestErrorSpam/setup 500.82
47 TestFunctional/serial/StartWithProxy 501.89
49 TestFunctional/serial/SoftStart 366.58
51 TestFunctional/serial/KubectlGetPods 2.19
61 TestFunctional/serial/MinikubeKubectlCmd 2.19
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.17
63 TestFunctional/serial/ExtraConfig 736.04
64 TestFunctional/serial/ComponentHealth 1.9
67 TestFunctional/serial/InvalidService 0.05
70 TestFunctional/parallel/DashboardCmd 1.66
73 TestFunctional/parallel/StatusCmd 2.97
77 TestFunctional/parallel/ServiceCmdConnect 2.24
79 TestFunctional/parallel/PersistentVolumeClaim 241.53
83 TestFunctional/parallel/MySQL 1.35
89 TestFunctional/parallel/NodeLabels 1.36
94 TestFunctional/parallel/ServiceCmd/DeployApp 0.07
95 TestFunctional/parallel/ServiceCmd/List 0.29
96 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
98 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
99 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
102 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.08
103 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107
104 TestFunctional/parallel/ServiceCmd/Format 0.27
105 TestFunctional/parallel/ServiceCmd/URL 0.27
109 TestFunctional/parallel/MountCmd/any-port 2.5
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
141 TestMultiControlPlane/serial/StartCluster 502.12
142 TestMultiControlPlane/serial/DeployApp 102.01
143 TestMultiControlPlane/serial/PingHostFromPods 1.32
144 TestMultiControlPlane/serial/AddWorkerNode 1.49
145 TestMultiControlPlane/serial/NodeLabels 1.3
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.56
147 TestMultiControlPlane/serial/CopyFile 1.54
148 TestMultiControlPlane/serial/StopSecondaryNode 1.6
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.56
150 TestMultiControlPlane/serial/RestartSecondaryNode 52.95
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.58
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 370.17
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.77
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.54
155 TestMultiControlPlane/serial/StopCluster 1.36
156 TestMultiControlPlane/serial/RestartCluster 368.35
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.58
158 TestMultiControlPlane/serial/AddSecondaryNode 1.53
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.59
163 TestJSONOutput/start/Command 497.14
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 500.52
221 TestMultiNode/serial/ValidateNameConflict 7200.056
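To reproduce any single failure from this table locally, the corresponding integration test can be run on its own. A minimal sketch, assuming a checkout of the minikube repo and a prebuilt out/minikube-linux-amd64 binary (driver and runtime selection go through harness-specific flags that are omitted here):

  # Hypothetical local re-run of one failing test; -run, -v and -timeout are
  # standard go test flags, the package path assumes the upstream repo layout.
  go test ./test/integration -v -timeout 90m -run 'TestAddons/Setup'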
TestAddons/Setup (517.58s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-246638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-246638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m37.548089151s)
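With fifteen --addons flags in play, the failing start is hard to localize from exit status 80 alone. An illustrative first triage step is to retry the same start without any addons, reusing only flags that already appear in the command above:

  # Hedged minimal reproduction: same profile, driver, runtime and memory
  # as the failing run, minus the addon set.
  out/minikube-linux-amd64 start -p addons-246638 --driver=docker \
    --container-runtime=crio --memory=4096 --wait=true --alsologtostderr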

-- stdout --
	* [addons-246638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-246638" primary control-plane node in "addons-246638" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1009 17:56:22.349552   16230 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:56:22.349787   16230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:22.349796   16230 out.go:374] Setting ErrFile to fd 2...
	I1009 17:56:22.349799   16230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:22.349960   16230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 17:56:22.350457   16230 out.go:368] Setting JSON to false
	I1009 17:56:22.351195   16230 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2330,"bootTime":1760030252,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:56:22.351280   16230 start.go:141] virtualization: kvm guest
	I1009 17:56:22.353222   16230 out.go:179] * [addons-246638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:56:22.354488   16230 notify.go:220] Checking for updates...
	I1009 17:56:22.354536   16230 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 17:56:22.355865   16230 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:56:22.357261   16230 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 17:56:22.358701   16230 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 17:56:22.360163   16230 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 17:56:22.361581   16230 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 17:56:22.363211   16230 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:56:22.386832   16230 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 17:56:22.386904   16230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:22.447904   16230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-09 17:56:22.437160101 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:22.448045   16230 docker.go:318] overlay module found
	I1009 17:56:22.449982   16230 out.go:179] * Using the docker driver based on user configuration
	I1009 17:56:22.451146   16230 start.go:305] selected driver: docker
	I1009 17:56:22.451161   16230 start.go:925] validating driver "docker" against <nil>
	I1009 17:56:22.451173   16230 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 17:56:22.451684   16230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:22.504975   16230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-09 17:56:22.495674846 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:22.505132   16230 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:56:22.505426   16230 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 17:56:22.507095   16230 out.go:179] * Using Docker driver with root privileges
	I1009 17:56:22.508287   16230 cni.go:84] Creating CNI manager for ""
	I1009 17:56:22.508341   16230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 17:56:22.508349   16230 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 17:56:22.508421   16230 start.go:349] cluster config:
	{Name:addons-246638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-246638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1009 17:56:22.509724   16230 out.go:179] * Starting "addons-246638" primary control-plane node in "addons-246638" cluster
	I1009 17:56:22.510757   16230 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 17:56:22.511976   16230 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 17:56:22.513074   16230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:56:22.513102   16230 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 17:56:22.513116   16230 cache.go:64] Caching tarball of preloaded images
	I1009 17:56:22.513117   16230 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 17:56:22.513213   16230 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 17:56:22.513228   16230 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 17:56:22.513572   16230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/config.json ...
	I1009 17:56:22.513606   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/config.json: {Name:mk52cf9bfad34036b46966f45fea65eee50581d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:22.529671   16230 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 17:56:22.529801   16230 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 17:56:22.529822   16230 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory, skipping pull
	I1009 17:56:22.529831   16230 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in cache, skipping pull
	I1009 17:56:22.529855   16230 cache.go:165] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 as a tarball
	I1009 17:56:22.529864   16230 cache.go:175] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from local cache
	I1009 17:56:35.311717   16230 cache.go:177] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 from cached tarball
	I1009 17:56:35.311756   16230 cache.go:242] Successfully downloaded all kic artifacts
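If the tarball load above is ever in doubt, the image's presence in the local daemon is easy to spot-check. An assumed verification, with the repository name taken from the log:

  # Assumed spot check that the kic base image is now in the local daemon:
  docker images --digests gcr.io/k8s-minikube/kicbase-builds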
	I1009 17:56:35.311793   16230 start.go:360] acquireMachinesLock for addons-246638: {Name:mke0e9b67633e860dfec0cd9a501c85fb54933b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 17:56:35.311898   16230 start.go:364] duration metric: took 81.71µs to acquireMachinesLock for "addons-246638"
	I1009 17:56:35.311925   16230 start.go:93] Provisioning new machine with config: &{Name:addons-246638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-246638 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 17:56:35.312035   16230 start.go:125] createHost starting for "" (driver="docker")
	I1009 17:56:35.314098   16230 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 17:56:35.314377   16230 start.go:159] libmachine.API.Create for "addons-246638" (driver="docker")
	I1009 17:56:35.314408   16230 client.go:168] LocalClient.Create starting
	I1009 17:56:35.314516   16230 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 17:56:35.721990   16230 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 17:56:36.136595   16230 cli_runner.go:164] Run: docker network inspect addons-246638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 17:56:36.153378   16230 cli_runner.go:211] docker network inspect addons-246638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 17:56:36.153447   16230 network_create.go:284] running [docker network inspect addons-246638] to gather additional debugging logs...
	I1009 17:56:36.153463   16230 cli_runner.go:164] Run: docker network inspect addons-246638
	W1009 17:56:36.169345   16230 cli_runner.go:211] docker network inspect addons-246638 returned with exit code 1
	I1009 17:56:36.169389   16230 network_create.go:287] error running [docker network inspect addons-246638]: docker network inspect addons-246638: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-246638 not found
	I1009 17:56:36.169409   16230 network_create.go:289] output of [docker network inspect addons-246638]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-246638 not found
	
	** /stderr **
	I1009 17:56:36.169510   16230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 17:56:36.185883   16230 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d98590}
	I1009 17:56:36.185925   16230 network_create.go:124] attempt to create docker network addons-246638 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 17:56:36.185970   16230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-246638 addons-246638
	I1009 17:56:36.283860   16230 network_create.go:108] docker network addons-246638 192.168.49.0/24 created
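A quick way to confirm the subnet and gateway that were actually applied is to inspect the new network directly; the format string below is illustrative, not taken from the test:

  # Print the subnet/gateway of the network minikube just created.
  docker network inspect addons-246638 \
    --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'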
	I1009 17:56:36.283892   16230 kic.go:121] calculated static IP "192.168.49.2" for the "addons-246638" container
	I1009 17:56:36.283960   16230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 17:56:36.301249   16230 cli_runner.go:164] Run: docker volume create addons-246638 --label name.minikube.sigs.k8s.io=addons-246638 --label created_by.minikube.sigs.k8s.io=true
	I1009 17:56:36.374204   16230 oci.go:103] Successfully created a docker volume addons-246638
	I1009 17:56:36.374309   16230 cli_runner.go:164] Run: docker run --rm --name addons-246638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246638 --entrypoint /usr/bin/test -v addons-246638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 17:56:43.116939   16230 cli_runner.go:217] Completed: docker run --rm --name addons-246638-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246638 --entrypoint /usr/bin/test -v addons-246638:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib: (6.742588566s)
	I1009 17:56:43.116967   16230 oci.go:107] Successfully prepared a docker volume addons-246638
	I1009 17:56:43.117011   16230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:56:43.117029   16230 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 17:56:43.117079   16230 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-246638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 17:56:47.571727   16230 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-246638:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.454581031s)
	I1009 17:56:47.571769   16230 kic.go:203] duration metric: took 4.454737479s to extract preloaded images to volume ...
	W1009 17:56:47.571852   16230 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 17:56:47.571891   16230 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 17:56:47.571929   16230 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 17:56:47.627891   16230 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-246638 --name addons-246638 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-246638 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-246638 --network addons-246638 --ip 192.168.49.2 --volume addons-246638:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
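Each --publish=127.0.0.1:: flag above lets Docker choose an ephemeral host port. Which port it picked for a given container port can be read back afterwards; an illustrative check for the API server:

  # Assumed check: which loopback port Docker mapped to the apiserver's 8443.
  docker port addons-246638 8443/tcp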
	I1009 17:56:47.930957   16230 cli_runner.go:164] Run: docker container inspect addons-246638 --format={{.State.Running}}
	I1009 17:56:47.950875   16230 cli_runner.go:164] Run: docker container inspect addons-246638 --format={{.State.Status}}
	I1009 17:56:47.972797   16230 cli_runner.go:164] Run: docker exec addons-246638 stat /var/lib/dpkg/alternatives/iptables
	I1009 17:56:48.020458   16230 oci.go:144] the created container "addons-246638" has a running status.
	I1009 17:56:48.020491   16230 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa...
	I1009 17:56:48.495644   16230 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 17:56:48.522369   16230 cli_runner.go:164] Run: docker container inspect addons-246638 --format={{.State.Status}}
	I1009 17:56:48.540497   16230 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 17:56:48.540517   16230 kic_runner.go:114] Args: [docker exec --privileged addons-246638 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 17:56:48.588202   16230 cli_runner.go:164] Run: docker container inspect addons-246638 --format={{.State.Status}}
	I1009 17:56:48.607574   16230 machine.go:93] provisionDockerMachine start ...
	I1009 17:56:48.607657   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:48.625466   16230 main.go:141] libmachine: Using SSH client type: native
	I1009 17:56:48.625715   16230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 17:56:48.625732   16230 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 17:56:48.773479   16230 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246638
	
	I1009 17:56:48.773507   16230 ubuntu.go:182] provisioning hostname "addons-246638"
	I1009 17:56:48.773568   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:48.791608   16230 main.go:141] libmachine: Using SSH client type: native
	I1009 17:56:48.791807   16230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 17:56:48.791822   16230 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-246638 && echo "addons-246638" | sudo tee /etc/hostname
	I1009 17:56:48.946807   16230 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-246638
	
	I1009 17:56:48.946871   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:48.964798   16230 main.go:141] libmachine: Using SSH client type: native
	I1009 17:56:48.965044   16230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 17:56:48.965076   16230 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-246638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-246638/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-246638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 17:56:49.110734   16230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 17:56:49.110762   16230 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 17:56:49.110808   16230 ubuntu.go:190] setting up certificates
	I1009 17:56:49.110821   16230 provision.go:84] configureAuth start
	I1009 17:56:49.110880   16230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246638
	I1009 17:56:49.128560   16230 provision.go:143] copyHostCerts
	I1009 17:56:49.128634   16230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 17:56:49.128741   16230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 17:56:49.128806   16230 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 17:56:49.128858   16230 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.addons-246638 san=[127.0.0.1 192.168.49.2 addons-246638 localhost minikube]
	I1009 17:56:49.597407   16230 provision.go:177] copyRemoteCerts
	I1009 17:56:49.597463   16230 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 17:56:49.597506   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:49.615171   16230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa Username:docker}
	I1009 17:56:49.718332   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 17:56:49.737426   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 17:56:49.754734   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 17:56:49.771589   16230 provision.go:87] duration metric: took 660.75429ms to configureAuth
	I1009 17:56:49.771613   16230 ubuntu.go:206] setting minikube options for container-runtime
	I1009 17:56:49.771787   16230 config.go:182] Loaded profile config "addons-246638": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 17:56:49.771893   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:49.789444   16230 main.go:141] libmachine: Using SSH client type: native
	I1009 17:56:49.789642   16230 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 17:56:49.789659   16230 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 17:56:50.045026   16230 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
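Should the drop-in need verifying after the fact, it can be read back from inside the node container; a hypothetical check using the path from the SSH command above:

  # Hypothetical verification of the CRI-O drop-in written over SSH above:
  docker exec addons-246638 cat /etc/sysconfig/crio.minikube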
	I1009 17:56:50.045049   16230 machine.go:96] duration metric: took 1.437455098s to provisionDockerMachine
	I1009 17:56:50.045072   16230 client.go:171] duration metric: took 14.730658886s to LocalClient.Create
	I1009 17:56:50.045093   16230 start.go:167] duration metric: took 14.73072133s to libmachine.API.Create "addons-246638"
	I1009 17:56:50.045103   16230 start.go:293] postStartSetup for "addons-246638" (driver="docker")
	I1009 17:56:50.045114   16230 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 17:56:50.045190   16230 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 17:56:50.045227   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:50.062376   16230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa Username:docker}
	I1009 17:56:50.166265   16230 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 17:56:50.169787   16230 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 17:56:50.169812   16230 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 17:56:50.169823   16230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 17:56:50.169880   16230 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 17:56:50.169913   16230 start.go:296] duration metric: took 124.804595ms for postStartSetup
	I1009 17:56:50.170226   16230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246638
	I1009 17:56:50.188285   16230 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/config.json ...
	I1009 17:56:50.188558   16230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 17:56:50.188598   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:50.206932   16230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa Username:docker}
	I1009 17:56:50.306395   16230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 17:56:50.311080   16230 start.go:128] duration metric: took 14.999030507s to createHost
	I1009 17:56:50.311108   16230 start.go:83] releasing machines lock for "addons-246638", held for 14.999196337s
	I1009 17:56:50.311194   16230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-246638
	I1009 17:56:50.328651   16230 ssh_runner.go:195] Run: cat /version.json
	I1009 17:56:50.328703   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:50.328727   16230 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 17:56:50.328779   16230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-246638
	I1009 17:56:50.348594   16230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa Username:docker}
	I1009 17:56:50.349052   16230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/addons-246638/id_rsa Username:docker}
	I1009 17:56:50.448229   16230 ssh_runner.go:195] Run: systemctl --version
	I1009 17:56:50.502907   16230 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 17:56:50.539933   16230 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 17:56:50.544887   16230 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 17:56:50.544945   16230 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 17:56:50.572597   16230 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 17:56:50.572627   16230 start.go:495] detecting cgroup driver to use...
	I1009 17:56:50.572663   16230 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 17:56:50.572712   16230 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 17:56:50.588365   16230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 17:56:50.600829   16230 docker.go:218] disabling cri-docker service (if available) ...
	I1009 17:56:50.600880   16230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 17:56:50.617651   16230 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 17:56:50.635594   16230 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 17:56:50.715504   16230 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 17:56:50.800714   16230 docker.go:234] disabling docker service ...
	I1009 17:56:50.800778   16230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 17:56:50.819712   16230 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 17:56:50.832909   16230 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 17:56:50.912923   16230 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 17:56:50.994414   16230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 17:56:51.007717   16230 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 17:56:51.022388   16230 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 17:56:51.022448   16230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.033240   16230 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 17:56:51.033299   16230 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.042534   16230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.051235   16230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.059819   16230 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 17:56:51.067611   16230 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.076153   16230 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.089072   16230 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 17:56:51.097489   16230 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 17:56:51.104649   16230 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 17:56:51.104698   16230 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 17:56:51.117047   16230 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
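After the modprobe above, the sysctl that initially failed should resolve. The test does not log a re-check, but an assumed one from the host would look like:

  # Assumed re-check inside the node: br_netfilter loaded, sysctls readable.
  docker exec addons-246638 sh -c \
    'lsmod | grep br_netfilter; sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward'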
	I1009 17:56:51.124513   16230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 17:56:51.201630   16230 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 17:56:51.302934   16230 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 17:56:51.303024   16230 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 17:56:51.306915   16230 start.go:563] Will wait 60s for crictl version
	I1009 17:56:51.306970   16230 ssh_runner.go:195] Run: which crictl
	I1009 17:56:51.310384   16230 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 17:56:51.334937   16230 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 17:56:51.335047   16230 ssh_runner.go:195] Run: crio --version
	I1009 17:56:51.362821   16230 ssh_runner.go:195] Run: crio --version
	I1009 17:56:51.393067   16230 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 17:56:51.394627   16230 cli_runner.go:164] Run: docker network inspect addons-246638 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 17:56:51.411677   16230 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 17:56:51.415897   16230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 17:56:51.426766   16230 kubeadm.go:883] updating cluster {Name:addons-246638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-246638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 17:56:51.426887   16230 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 17:56:51.426928   16230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 17:56:51.456316   16230 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 17:56:51.456337   16230 crio.go:433] Images already preloaded, skipping extraction
	I1009 17:56:51.456378   16230 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 17:56:51.480662   16230 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 17:56:51.480682   16230 cache_images.go:85] Images are preloaded, skipping loading
	I1009 17:56:51.480689   16230 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 17:56:51.480763   16230 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-246638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-246638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 17:56:51.480818   16230 ssh_runner.go:195] Run: crio config
	I1009 17:56:51.526299   16230 cni.go:84] Creating CNI manager for ""
	I1009 17:56:51.526330   16230 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 17:56:51.526351   16230 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 17:56:51.526384   16230 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-246638 NodeName:addons-246638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 17:56:51.526524   16230 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-246638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
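Before kubeadm consumes this config, it can be sanity-checked offline. A sketch assuming kubeadm v1.34.1 ships the config validate subcommand, and using the kubeadm.yaml.new path that the log scp's a few lines below:

  # Hypothetical offline validation of the generated kubeadm config:
  docker exec addons-246638 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
    config validate --config /var/tmp/minikube/kubeadm.yaml.new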
	I1009 17:56:51.526592   16230 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 17:56:51.534525   16230 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 17:56:51.534588   16230 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 17:56:51.542041   16230 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 17:56:51.553812   16230 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 17:56:51.568423   16230 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1009 17:56:51.580584   16230 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 17:56:51.584091   16230 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 17:56:51.593418   16230 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 17:56:51.673241   16230 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 17:56:51.697522   16230 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638 for IP: 192.168.49.2
	I1009 17:56:51.697546   16230 certs.go:195] generating shared ca certs ...
	I1009 17:56:51.697560   16230 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:51.697684   16230 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 17:56:51.890802   16230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt ...
	I1009 17:56:51.890831   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt: {Name:mk8bf228d7a5e755d1df27a58193ac6f659ad78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:51.890996   16230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key ...
	I1009 17:56:51.891007   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key: {Name:mkbc11b070d69571d630f37e403c2b398fb2547d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:51.891087   16230 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 17:56:51.992503   16230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt ...
	I1009 17:56:51.992532   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt: {Name:mk2884bb16b41c6d1e62772d86d356a89b016350 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:51.992686   16230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key ...
	I1009 17:56:51.992697   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key: {Name:mk31affd6ab1e61f7f73bbb8d072169cb85cddad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:51.992774   16230 certs.go:257] generating profile certs ...
	I1009 17:56:51.992824   16230 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/client.key
	I1009 17:56:51.992837   16230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/client.crt with IP's: []
	I1009 17:56:52.165934   16230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/client.crt ...
	I1009 17:56:52.165959   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/client.crt: {Name:mkcd79ccad97674c002fcbf332c0b47724afed9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:52.166111   16230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/client.key ...
	I1009 17:56:52.166121   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/client.key: {Name:mkcdc3c61a19f5f5dd1d1d8d28fbaffe29bddd1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:52.166202   16230 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.key.3145e8c9
	I1009 17:56:52.166220   16230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.crt.3145e8c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 17:56:52.296904   16230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.crt.3145e8c9 ...
	I1009 17:56:52.296934   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.crt.3145e8c9: {Name:mkdeee35c269f99597a980644c166fc20170d0aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:52.297093   16230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.key.3145e8c9 ...
	I1009 17:56:52.297107   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.key.3145e8c9: {Name:mk36b2159f84700d6444b6d2c3c4df4aadbd6492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:52.297194   16230 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.crt.3145e8c9 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.crt
	I1009 17:56:52.297287   16230 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.key.3145e8c9 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.key
	I1009 17:56:52.297350   16230 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.key
	I1009 17:56:52.297369   16230 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.crt with IP's: []
	I1009 17:56:52.442450   16230 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.crt ...
	I1009 17:56:52.442481   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.crt: {Name:mk5e1766ed27d50588af9500b52f431a80e9be0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:52.442648   16230 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.key ...
	I1009 17:56:52.442660   16230 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.key: {Name:mk409b3206ee2bc5b86eb5535e9726b23c0cb9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:52.442833   16230 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 17:56:52.442868   16230 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 17:56:52.442891   16230 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 17:56:52.442913   16230 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 17:56:52.443490   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 17:56:52.462393   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 17:56:52.481268   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 17:56:52.498758   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 17:56:52.515086   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 17:56:52.532208   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 17:56:52.548399   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 17:56:52.564562   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/addons-246638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 17:56:52.580624   16230 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 17:56:52.598407   16230 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
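
At this point the shared CAs, the profile certs, and the kubeconfig have all been copied into the node. A hedged spot-check of the destination directory, assuming a shell inside the node:

	sudo ls -l /var/lib/minikube/certs/   # expect ca.*, proxy-client-ca.*, apiserver.*, proxy-client.* pairs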
	I1009 17:56:52.610527   16230 ssh_runner.go:195] Run: openssl version
	I1009 17:56:52.616231   16230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 17:56:52.626766   16230 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 17:56:52.630286   16230 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 17:56:52.630337   16230 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 17:56:52.664160   16230 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
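
OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filename, which is why the symlink above is named b5213941.0: the hash printed by the x509 command, suffixed with .0. A sketch of the correspondence:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem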
	I1009 17:56:52.672603   16230 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 17:56:52.676204   16230 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 17:56:52.676258   16230 kubeadm.go:400] StartCluster: {Name:addons-246638 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-246638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:56:52.676335   16230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 17:56:52.676394   16230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 17:56:52.702050   16230 cri.go:89] found id: ""
	I1009 17:56:52.702110   16230 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 17:56:52.710047   16230 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 17:56:52.717492   16230 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 17:56:52.717545   16230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 17:56:52.724961   16230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 17:56:52.724989   16230 kubeadm.go:157] found existing configuration files:
	
	I1009 17:56:52.725054   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 17:56:52.732214   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 17:56:52.732267   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 17:56:52.739167   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 17:56:52.746654   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 17:56:52.746712   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 17:56:52.754034   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 17:56:52.761419   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 17:56:52.761482   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 17:56:52.768694   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 17:56:52.775868   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 17:56:52.775960   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 17:56:52.782826   16230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 17:56:52.817573   16230 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 17:56:52.817646   16230 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 17:56:52.837993   16230 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 17:56:52.838082   16230 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 17:56:52.838170   16230 kubeadm.go:318] OS: Linux
	I1009 17:56:52.838260   16230 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 17:56:52.838351   16230 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 17:56:52.838429   16230 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 17:56:52.838507   16230 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 17:56:52.838565   16230 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 17:56:52.838625   16230 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 17:56:52.838666   16230 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 17:56:52.838707   16230 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 17:56:52.892069   16230 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 17:56:52.892242   16230 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 17:56:52.892380   16230 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 17:56:52.899918   16230 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 17:56:52.902588   16230 out.go:252]   - Generating certificates and keys ...
	I1009 17:56:52.902711   16230 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 17:56:52.902822   16230 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 17:56:52.973065   16230 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 17:56:53.229564   16230 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 17:56:53.285187   16230 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 17:56:53.631094   16230 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 17:56:53.921686   16230 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 17:56:53.921836   16230 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-246638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 17:56:54.003108   16230 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 17:56:54.003307   16230 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-246638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 17:56:54.087479   16230 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 17:56:54.241652   16230 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 17:56:54.678491   16230 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 17:56:54.678579   16230 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 17:56:54.781656   16230 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 17:56:55.015174   16230 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 17:56:55.107535   16230 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 17:56:55.263431   16230 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 17:56:55.368030   16230 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 17:56:55.368437   16230 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 17:56:55.373525   16230 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 17:56:55.375977   16230 out.go:252]   - Booting up control plane ...
	I1009 17:56:55.376068   16230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 17:56:55.376197   16230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 17:56:55.376755   16230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 17:56:55.392777   16230 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 17:56:55.392899   16230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 17:56:55.399318   16230 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 17:56:55.399569   16230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 17:56:55.399654   16230 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 17:56:55.491693   16230 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 17:56:55.491884   16230 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 17:56:56.493495   16230 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001915968s
	I1009 17:56:56.496500   16230 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 17:56:56.496630   16230 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 17:56:56.496779   16230 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 17:56:56.496897   16230 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:00:56.496808   16230 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000025885s
	I1009 18:00:56.496911   16230 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000116728s
	I1009 18:00:56.497065   16230 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000127012s
	I1009 18:00:56.497119   16230 kubeadm.go:318] 
	I1009 18:00:56.497289   16230 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:00:56.497429   16230 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:00:56.497570   16230 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:00:56.497747   16230 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:00:56.497854   16230 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:00:56.497969   16230 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:00:56.497980   16230 kubeadm.go:318] 
	I1009 18:00:56.500417   16230 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:00:56.500571   16230 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:00:56.501222   16230 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused]
	I1009 18:00:56.501356   16230 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 18:00:56.501523   16230 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-246638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-246638 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001915968s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000025885s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000116728s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000127012s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
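
The failure message itself names the triage path: list the runtime's containers, find the control-plane container that exited, and read its logs. A minimal sketch using the exact crictl invocations suggested above (CONTAINERID is a placeholder to fill in from the ps output):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID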
	
	I1009 18:00:56.501607   16230 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:00:56.939043   16230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
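
systemctl is-active --quiet reports only via its exit code (0 means active), which is all minikube needs here. The kubelet's own health endpoint, the same one kubeadm polled above, can also be probed directly; a hedged check from inside the node:

	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet: ok"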
	I1009 18:00:56.951933   16230 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:00:56.951984   16230 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:00:56.960233   16230 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:00:56.960263   16230 kubeadm.go:157] found existing configuration files:
	
	I1009 18:00:56.960315   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:00:56.968129   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:00:56.968209   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:00:56.975926   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:00:56.983845   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:00:56.983906   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:00:56.991858   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:00:57.000125   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:00:57.000196   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:00:57.007841   16230 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:00:57.015610   16230 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:00:57.015661   16230 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:00:57.023187   16230 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:00:57.059606   16230 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:00:57.059696   16230 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:00:57.080390   16230 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:00:57.080472   16230 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:00:57.080519   16230 kubeadm.go:318] OS: Linux
	I1009 18:00:57.080589   16230 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:00:57.080659   16230 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:00:57.080720   16230 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:00:57.080790   16230 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:00:57.080857   16230 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:00:57.080940   16230 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:00:57.081008   16230 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:00:57.081048   16230 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:00:57.137439   16230 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:00:57.137628   16230 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:00:57.137773   16230 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:00:57.143898   16230 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:00:57.147326   16230 out.go:252]   - Generating certificates and keys ...
	I1009 18:00:57.147442   16230 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:00:57.147521   16230 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:00:57.147639   16230 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:00:57.147732   16230 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:00:57.147826   16230 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:00:57.147910   16230 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:00:57.147995   16230 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:00:57.148083   16230 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:00:57.148226   16230 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:00:57.148331   16230 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:00:57.148393   16230 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:00:57.148478   16230 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:00:57.353279   16230 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:00:57.627981   16230 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:00:57.872837   16230 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:00:58.166609   16230 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:00:58.295987   16230 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:00:58.296450   16230 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:00:58.299325   16230 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:00:58.301183   16230 out.go:252]   - Booting up control plane ...
	I1009 18:00:58.301263   16230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:00:58.301367   16230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:00:58.301428   16230 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:00:58.314818   16230 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:00:58.314964   16230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:00:58.321422   16230 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:00:58.321700   16230 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:00:58.321744   16230 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:00:58.427300   16230 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:00:58.427429   16230 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:00:59.429013   16230 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001839874s
	I1009 18:00:59.432323   16230 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:00:59.432439   16230 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:00:59.432561   16230 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:00:59.432676   16230 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:04:59.432767   16230 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000299072s
	I1009 18:04:59.432881   16230 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000410569s
	I1009 18:04:59.432975   16230 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000364254s
	I1009 18:04:59.432985   16230 kubeadm.go:318] 
	I1009 18:04:59.433098   16230 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:04:59.433238   16230 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:04:59.433352   16230 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:04:59.433490   16230 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:04:59.433610   16230 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:04:59.433723   16230 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:04:59.433733   16230 kubeadm.go:318] 
	I1009 18:04:59.436002   16230 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:04:59.436182   16230 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:04:59.436824   16230 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:04:59.436916   16230 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
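
Both init attempts time out on the same three health endpoints, so once kubeadm gives up they can be re-probed by hand. A sketch, assuming curl inside the node (-k skips verification of the self-signed serving certs):

	curl -k https://192.168.49.2:8443/livez     # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez       # kube-scheduler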
	I1009 18:04:59.436986   16230 kubeadm.go:402] duration metric: took 8m6.76074079s to StartCluster
	I1009 18:04:59.437027   16230 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:04:59.437078   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:04:59.462479   16230 cri.go:89] found id: ""
	I1009 18:04:59.462515   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.462526   16230 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:04:59.462534   16230 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:04:59.462598   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:04:59.487368   16230 cri.go:89] found id: ""
	I1009 18:04:59.487393   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.487403   16230 logs.go:284] No container was found matching "etcd"
	I1009 18:04:59.487411   16230 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:04:59.487472   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:04:59.512643   16230 cri.go:89] found id: ""
	I1009 18:04:59.512668   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.512679   16230 logs.go:284] No container was found matching "coredns"
	I1009 18:04:59.512687   16230 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:04:59.512736   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:04:59.537309   16230 cri.go:89] found id: ""
	I1009 18:04:59.537337   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.537345   16230 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:04:59.537351   16230 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:04:59.537405   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:04:59.563372   16230 cri.go:89] found id: ""
	I1009 18:04:59.563398   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.563406   16230 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:04:59.563412   16230 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:04:59.563458   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:04:59.588644   16230 cri.go:89] found id: ""
	I1009 18:04:59.588670   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.588680   16230 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:04:59.588688   16230 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:04:59.588746   16230 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:04:59.614989   16230 cri.go:89] found id: ""
	I1009 18:04:59.615016   16230 logs.go:282] 0 containers: []
	W1009 18:04:59.615024   16230 logs.go:284] No container was found matching "kindnet"
	I1009 18:04:59.615035   16230 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:04:59.615073   16230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:04:59.678159   16230 logs.go:123] Gathering logs for container status ...
	I1009 18:04:59.678197   16230 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:04:59.706825   16230 logs.go:123] Gathering logs for kubelet ...
	I1009 18:04:59.706849   16230 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:04:59.774102   16230 logs.go:123] Gathering logs for dmesg ...
	I1009 18:04:59.774150   16230 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:04:59.786157   16230 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:04:59.786191   16230 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:04:59.843039   16230 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:04:59.836233    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.836739    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.838249    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.838703    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.840223    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:04:59.836233    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.836739    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.838249    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.838703    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:04:59.840223    2389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
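
The refused connection to localhost:8443 is consistent with the health-check failures above: kube-apiserver never came up, so nothing listens on the apiserver port. A hedged confirmation, assuming iproute2's ss is present in the node image:

	sudo ss -ltnp | grep 8443   # empty output = no listener on 8443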
	W1009 18:04:59.843071   16230 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001839874s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000299072s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000410569s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000364254s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:04:59.843109   16230 out.go:285] * 
	W1009 18:04:59.843202   16230 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001839874s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000299072s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000410569s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000364254s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:04:59.843223   16230 out.go:285] * 
	W1009 18:04:59.844926   16230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:04:59.849256   16230 out.go:203] 
	W1009 18:04:59.850926   16230 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001839874s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000299072s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000410569s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000364254s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:04:59.850983   16230 out.go:285] * 
	I1009 18:04:59.853283   16230 out.go:203] 

** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-246638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (517.58s)
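The kubeadm output above already names the triage path: list the control-plane containers CRI-O started and read their logs. A minimal sketch of that session, assuming the profile addons-246638 from this run is still up; the crictl invocations and `minikube logs --file` come from the log's own advice, while CONTAINERID and the curl probes of the polled health endpoints are illustrative:

	# Collect the full log bundle that minikube's advice box asks for.
	out/minikube-linux-amd64 -p addons-246638 logs --file=logs.txt

	# List every kube container CRI-O has started, including exited ones.
	out/minikube-linux-amd64 -p addons-246638 ssh \
	  "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Read the logs of the failing container, using an ID from the list above.
	out/minikube-linux-amd64 -p addons-246638 ssh \
	  "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

	# Probe the same endpoints the control-plane-check polled, from inside the node.
	out/minikube-linux-amd64 -p addons-246638 ssh \
	  "curl -sk https://192.168.49.2:8443/livez; curl -sk https://127.0.0.1:10257/healthz; curl -sk https://127.0.0.1:10259/livez"

If the kube-* containers never appear in the first listing, the failure is upstream of CRI-O (kubelet static-pod sync or image pulls), and `journalctl -u kubelet` on the node is usually the next stop.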

TestErrorSpam/setup (500.82s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-663194 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-663194 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-663194 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-663194 --driver=docker  --container-runtime=crio: exit status 80 (8m20.80742336s)

-- stdout --
	* [nospam-663194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-663194" primary control-plane node in "nospam-663194" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-663194] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-663194] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000985906s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000083054s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000252059s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000351428s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501811298s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001117492s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001342973s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001458047s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501811298s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001117492s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001342973s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001458047s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-663194 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-663194 --driver=docker  --container-runtime=crio" failed: exit status 80
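A note on the assertions below: the `\x1b[0;37m...\x1b[0m` runs are ANSI color escapes from kubeadm's system-verification table, not corruption in the capture; the spam check compares the raw stderr text, so the escapes surface verbatim. A minimal sketch for stripping them when inspecting such captures by hand, assuming GNU sed (which understands the \x1b escape) and a hypothetical file name:

	# Remove ANSI color sequences (ESC [ ... m) from a captured log.
	sed -e 's/\x1b\[[0-9;]*m//g' nospam-stderr.log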
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-663194] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-663194] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.000985906s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000083054s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000252059s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000351428s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "KERNEL_VERSION: 6.8.0-1041-gcp"
error_spam_test.go:96: unexpected stderr: "OS: Linux"
error_spam_test.go:96: unexpected stderr: "CGROUPS_CPU: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_CPUSET: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_DEVICES: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_FREEZER: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_MEMORY: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_PIDS: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_HUGETLB: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_IO: enabled"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.501811298s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001117492s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.001342973s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001458047s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "KERNEL_VERSION: 6.8.0-1041-gcp"
error_spam_test.go:96: unexpected stderr: "OS: Linux"
error_spam_test.go:96: unexpected stderr: "CGROUPS_CPU: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_CPUSET: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_DEVICES: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_FREEZER: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_MEMORY: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_PIDS: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_HUGETLB: enabled"
error_spam_test.go:96: unexpected stderr: "CGROUPS_IO: enabled"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.501811298s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001117492s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.001342973s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001458047s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-663194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21139
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-663194" primary control-plane node in "nospam-663194" cluster
* Pulling base image v0.0.48-1759745255-21703 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...

error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-663194] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-663194] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.000985906s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-scheduler is not healthy after 4m0.000083054s
[control-plane-check] kube-apiserver is not healthy after 4m0.000252059s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000351428s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501811298s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001117492s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001342973s
[control-plane-check] kube-scheduler is not healthy after 4m0.001458047s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.501811298s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.001117492s
[control-plane-check] kube-controller-manager is not healthy after 4m0.001342973s
[control-plane-check] kube-scheduler is not healthy after 4m0.001458047s

A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'

stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher

* 
--- FAIL: TestErrorSpam/setup (500.82s)
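A minimal triage sketch for this failure mode, following the crictl hint that kubeadm itself prints above. The profile name nospam-663194 is taken from this run, CONTAINERID is a placeholder for an ID from the listing, and shell access to the node via minikube ssh is assumed:

    # open a shell on the failed node
    minikube ssh -p nospam-663194

    # inside the node: list control-plane containers via the CRI-O endpoint named in the output
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # inspect the logs of whichever component exited
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

    # back on the host: collect the log bundle the error message asks for
    minikube logs -p nospam-663194 --file=logs.txt

The same steps would apply to the identical wait-control-plane failures in the sections that follow.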

TestFunctional/serial/StartWithProxy (501.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m20.605526493s)

-- stdout --
	* [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34619
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34619 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-753440 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-753440 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501243038s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000056433s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000047358s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000038065s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.620488ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000383935s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000621834s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00069997s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.620488ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000383935s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000621834s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00069997s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
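The kubeadm hints in the captured output can be followed directly against the failed profile. A minimal diagnostic sketch (assuming the functional-753440 container is still running, crio is the runtime as the log shows, and curl is present in the kicbase image): shell into the node, list the control-plane containers, check the kubelet journal, and probe the same endpoints kubeadm timed out on:

	minikube ssh -p functional-753440 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	minikube ssh -p functional-753440 -- sudo journalctl -u kubelet --no-pager -n 50
	# -k because the apiserver certificate is not in the host trust store
	minikube ssh -p functional-753440 -- curl -sk https://192.168.49.2:8441/livez
	minikube ssh -p functional-753440 -- curl -sk https://127.0.0.1:10257/healthz

"connection refused" on all three checks, as seen here, usually means the static pods never came up at all, so the container logs surfaced by the crictl listing are the next place to look.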
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 6 (291.170349ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:21:54.382196   34315 status.go:458] kubeconfig endpoint: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
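The exit status 6 is consistent with the kubeconfig warning above: the start never completed, so no endpoint for functional-753440 was written. Had the profile come up, a sketch of the repair the status output itself suggests would be:

	minikube update-context -p functional-753440
	kubectl config current-context

In this run the entry is genuinely absent from /home/jenkins/minikube-integration/21139-11374/kubeconfig, so update-context cannot help until the start succeeds.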
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-837534                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-837534   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ delete  │ -p download-only-240600                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-240600   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ --download-only -p download-docker-360662 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-360662 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p download-docker-360662                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-360662 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ --download-only -p binary-mirror-609906 --alsologtostderr --binary-mirror http://127.0.0.1:44531 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-609906   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p binary-mirror-609906                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-609906   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ addons  │ enable dashboard -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ start   │ -p addons-246638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 18:04 UTC │ 09 Oct 25 18:05 UTC │
	│ start   │ -p nospam-663194 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-663194 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:05 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-753440      │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:13:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:13:33.518934   28938 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:13:33.519156   28938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:13:33.519161   28938 out.go:374] Setting ErrFile to fd 2...
	I1009 18:13:33.519165   28938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:13:33.519357   28938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:13:33.519849   28938 out.go:368] Setting JSON to false
	I1009 18:13:33.520690   28938 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3362,"bootTime":1760030252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:13:33.520762   28938 start.go:141] virtualization: kvm guest
	I1009 18:13:33.523228   28938 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:13:33.524747   28938 notify.go:220] Checking for updates...
	I1009 18:13:33.524756   28938 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:13:33.526311   28938 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:13:33.527793   28938 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:13:33.529090   28938 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:13:33.530384   28938 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:13:33.531849   28938 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:13:33.533366   28938 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:13:33.560058   28938 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:13:33.560180   28938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:13:33.617638   28938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:13:33.607215572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:13:33.617732   28938 docker.go:318] overlay module found
	I1009 18:13:33.619669   28938 out.go:179] * Using the docker driver based on user configuration
	I1009 18:13:33.620900   28938 start.go:305] selected driver: docker
	I1009 18:13:33.620909   28938 start.go:925] validating driver "docker" against <nil>
	I1009 18:13:33.620918   28938 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:13:33.621497   28938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:13:33.675561   28938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:13:33.66619132 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:13:33.675681   28938 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:13:33.675891   28938 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:13:33.677601   28938 out.go:179] * Using Docker driver with root privileges
	I1009 18:13:33.678869   28938 cni.go:84] Creating CNI manager for ""
	I1009 18:13:33.678916   28938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:13:33.678921   28938 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:13:33.678977   28938 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:13:33.680392   28938 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:13:33.681650   28938 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:13:33.683088   28938 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:13:33.684380   28938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:13:33.684403   28938 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:13:33.684409   28938 cache.go:64] Caching tarball of preloaded images
	I1009 18:13:33.684480   28938 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:13:33.684475   28938 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:13:33.684485   28938 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:13:33.684761   28938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:13:33.684775   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json: {Name:mka84c57eee5eac89637ffbc91b5ccc953f15847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:33.706313   28938 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:13:33.706326   28938 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:13:33.706351   28938 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:13:33.706379   28938 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:13:33.706487   28938 start.go:364] duration metric: took 92.57µs to acquireMachinesLock for "functional-753440"
	I1009 18:13:33.706509   28938 start.go:93] Provisioning new machine with config: &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:13:33.706590   28938 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:13:33.708784   28938 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1009 18:13:33.709018   28938 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34619 to docker env.
	I1009 18:13:33.709038   28938 start.go:159] libmachine.API.Create for "functional-753440" (driver="docker")
	I1009 18:13:33.709053   28938 client.go:168] LocalClient.Create starting
	I1009 18:13:33.709103   28938 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:13:33.709129   28938 main.go:141] libmachine: Decoding PEM data...
	I1009 18:13:33.709155   28938 main.go:141] libmachine: Parsing certificate...
	I1009 18:13:33.709218   28938 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:13:33.709236   28938 main.go:141] libmachine: Decoding PEM data...
	I1009 18:13:33.709246   28938 main.go:141] libmachine: Parsing certificate...
	I1009 18:13:33.709584   28938 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:13:33.726651   28938 cli_runner.go:211] docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:13:33.726708   28938 network_create.go:284] running [docker network inspect functional-753440] to gather additional debugging logs...
	I1009 18:13:33.726719   28938 cli_runner.go:164] Run: docker network inspect functional-753440
	W1009 18:13:33.744576   28938 cli_runner.go:211] docker network inspect functional-753440 returned with exit code 1
	I1009 18:13:33.744603   28938 network_create.go:287] error running [docker network inspect functional-753440]: docker network inspect functional-753440: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-753440 not found
	I1009 18:13:33.744615   28938 network_create.go:289] output of [docker network inspect functional-753440]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-753440 not found
	
	** /stderr **
	I1009 18:13:33.744753   28938 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:13:33.762987   28938 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001680900}
	I1009 18:13:33.763025   28938 network_create.go:124] attempt to create docker network functional-753440 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:13:33.763063   28938 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-753440 functional-753440
	I1009 18:13:33.819194   28938 network_create.go:108] docker network functional-753440 192.168.49.0/24 created
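	At this point the network exists and the values above can be cross-checked against Docker directly. A quick verification sketch (docker's Go-template output; assumes the network survives the failed run):
	
		docker network inspect functional-753440 --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	
	which should print 192.168.49.0/24 via 192.168.49.1, matching the free private subnet selected above.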
	I1009 18:13:33.819213   28938 kic.go:121] calculated static IP "192.168.49.2" for the "functional-753440" container
	I1009 18:13:33.819262   28938 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:13:33.835891   28938 cli_runner.go:164] Run: docker volume create functional-753440 --label name.minikube.sigs.k8s.io=functional-753440 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:13:33.854208   28938 oci.go:103] Successfully created a docker volume functional-753440
	I1009 18:13:33.854273   28938 cli_runner.go:164] Run: docker run --rm --name functional-753440-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753440 --entrypoint /usr/bin/test -v functional-753440:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:13:34.231993   28938 oci.go:107] Successfully prepared a docker volume functional-753440
	I1009 18:13:34.232055   28938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:13:34.232066   28938 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:13:34.232147   28938 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-753440:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:13:38.544249   28938 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-753440:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.312063323s)
	I1009 18:13:38.544268   28938 kic.go:203] duration metric: took 4.312199594s to extract preloaded images to volume ...
	W1009 18:13:38.544367   28938 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:13:38.544405   28938 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:13:38.544435   28938 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:13:38.596047   28938 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-753440 --name functional-753440 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-753440 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-753440 --network functional-753440 --ip 192.168.49.2 --volume functional-753440:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
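	Each --publish=127.0.0.1:: mapping in that docker run asks Docker to pick an ephemeral host port, which is why the inspect output earlier shows ports like 32778 and 32781. A sketch for resolving them after the fact:
	
		docker port functional-753440 8441/tcp
		docker port functional-753440 22/tcp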
	I1009 18:13:38.867110   28938 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Running}}
	I1009 18:13:38.886636   28938 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:13:38.906731   28938 cli_runner.go:164] Run: docker exec functional-753440 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:13:38.956554   28938 oci.go:144] the created container "functional-753440" has a running status.
	I1009 18:13:38.956586   28938 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa...
	I1009 18:13:39.059678   28938 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:13:39.083878   28938 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:13:39.104051   28938 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:13:39.104076   28938 kic_runner.go:114] Args: [docker exec --privileged functional-753440 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:13:39.149269   28938 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:13:39.171947   28938 machine.go:93] provisionDockerMachine start ...
	I1009 18:13:39.172046   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:39.195114   28938 main.go:141] libmachine: Using SSH client type: native
	I1009 18:13:39.195453   28938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:13:39.195464   28938 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:13:39.196044   28938 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37294->127.0.0.1:32778: read: connection reset by peer
	I1009 18:13:42.342273   28938 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:13:42.342291   28938 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:13:42.342341   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:42.360400   28938 main.go:141] libmachine: Using SSH client type: native
	I1009 18:13:42.360614   28938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:13:42.360622   28938 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:13:42.516004   28938 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:13:42.516085   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:42.534638   28938 main.go:141] libmachine: Using SSH client type: native
	I1009 18:13:42.534850   28938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:13:42.534861   28938 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:13:42.680183   28938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:13:42.680198   28938 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:13:42.680222   28938 ubuntu.go:190] setting up certificates
	I1009 18:13:42.680231   28938 provision.go:84] configureAuth start
	I1009 18:13:42.680278   28938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:13:42.697858   28938 provision.go:143] copyHostCerts
	I1009 18:13:42.697921   28938 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:13:42.697929   28938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:13:42.698019   28938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:13:42.698146   28938 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:13:42.698153   28938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:13:42.698195   28938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:13:42.698299   28938 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:13:42.698304   28938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:13:42.698337   28938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:13:42.698427   28938 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:13:43.025111   28938 provision.go:177] copyRemoteCerts
	I1009 18:13:43.025186   28938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:13:43.025218   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:43.043033   28938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:13:43.145656   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:13:43.165785   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:13:43.183556   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:13:43.201592   28938 provision.go:87] duration metric: took 521.344744ms to configureAuth
	I1009 18:13:43.201614   28938 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:13:43.201827   28938 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:13:43.201934   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:43.219308   28938 main.go:141] libmachine: Using SSH client type: native
	I1009 18:13:43.219508   28938 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:13:43.219518   28938 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:13:43.474063   28938 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:13:43.474080   28938 machine.go:96] duration metric: took 4.302118801s to provisionDockerMachine
	I1009 18:13:43.474090   28938 client.go:171] duration metric: took 9.76503292s to LocalClient.Create
	I1009 18:13:43.474115   28938 start.go:167] duration metric: took 9.765076013s to libmachine.API.Create "functional-753440"
	I1009 18:13:43.474121   28938 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:13:43.474131   28938 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:13:43.474303   28938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:13:43.474337   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:43.492434   28938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:13:43.595673   28938 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:13:43.599169   28938 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:13:43.599183   28938 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:13:43.599192   28938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:13:43.599245   28938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:13:43.599354   28938 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:13:43.599459   28938 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:13:43.599498   28938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:13:43.607262   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:13:43.627112   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:13:43.644833   28938 start.go:296] duration metric: took 170.698223ms for postStartSetup
	I1009 18:13:43.645207   28938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:13:43.662317   28938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:13:43.662554   28938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:13:43.662583   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:43.679484   28938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:13:43.779306   28938 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:13:43.783743   28938 start.go:128] duration metric: took 10.077140581s to createHost
	I1009 18:13:43.783759   28938 start.go:83] releasing machines lock for "functional-753440", held for 10.077264644s
	I1009 18:13:43.783815   28938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:13:43.804509   28938 out.go:179] * Found network options:
	I1009 18:13:43.805974   28938 out.go:179]   - HTTP_PROXY=localhost:34619
	W1009 18:13:43.807377   28938 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1009 18:13:43.808838   28938 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1009 18:13:43.810147   28938 ssh_runner.go:195] Run: cat /version.json
	I1009 18:13:43.810172   28938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:13:43.810188   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:43.810217   28938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:13:43.828912   28938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:13:43.829492   28938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:13:43.982208   28938 ssh_runner.go:195] Run: systemctl --version
	I1009 18:13:43.988485   28938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:13:44.022420   28938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:13:44.026952   28938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:13:44.027014   28938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:13:44.052378   28938 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:13:44.052391   28938 start.go:495] detecting cgroup driver to use...
	I1009 18:13:44.052418   28938 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:13:44.052466   28938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:13:44.068065   28938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:13:44.079779   28938 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:13:44.079817   28938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:13:44.095816   28938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:13:44.112298   28938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:13:44.193252   28938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:13:44.276658   28938 docker.go:234] disabling docker service ...
	I1009 18:13:44.276705   28938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:13:44.294899   28938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:13:44.307738   28938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:13:44.388763   28938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:13:44.463895   28938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:13:44.476021   28938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:13:44.489426   28938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:13:44.489478   28938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:13:44.499500   28938 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:13:44.499551   28938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:13:44.508496   28938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:13:44.517193   28938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:13:44.525624   28938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:13:44.533552   28938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:13:44.542091   28938 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:13:44.555521   28938 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
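Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following contents. Only the keys touched in the log are certain; the TOML table headers are assumptions about how the file is laid out:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]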
	I1009 18:13:44.564299   28938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:13:44.571358   28938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:13:44.578555   28938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:13:44.655997   28938 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:13:44.759316   28938 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:13:44.759363   28938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:13:44.763255   28938 start.go:563] Will wait 60s for crictl version
	I1009 18:13:44.763299   28938 ssh_runner.go:195] Run: which crictl
	I1009 18:13:44.766901   28938 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:13:44.790364   28938 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:13:44.790426   28938 ssh_runner.go:195] Run: crio --version
	I1009 18:13:44.817449   28938 ssh_runner.go:195] Run: crio --version
	I1009 18:13:44.847067   28938 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:13:44.848397   28938 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:13:44.865891   28938 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:13:44.869926   28938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:13:44.880344   28938 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:13:44.880489   28938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:13:44.880534   28938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:13:44.912309   28938 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:13:44.912320   28938 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:13:44.912362   28938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:13:44.939300   28938 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:13:44.939319   28938 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:13:44.939327   28938 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:13:44.939405   28938 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:13:44.939460   28938 ssh_runner.go:195] Run: crio config
	I1009 18:13:44.982635   28938 cni.go:84] Creating CNI manager for ""
	I1009 18:13:44.982652   28938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:13:44.982672   28938 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:13:44.982701   28938 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:13:44.982840   28938 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
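	As a sanity check, a rendered config like the one above can be validated offline before init ever runs, e.g. with kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml (a stock kubeadm subcommand in recent releases; it is not part of the minikube flow shown here).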
	
	I1009 18:13:44.982902   28938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:13:44.991165   28938 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:13:44.991222   28938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:13:44.999084   28938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:13:45.011445   28938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:13:45.026833   28938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:13:45.039433   28938 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:13:45.042965   28938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:13:45.052832   28938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:13:45.128767   28938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:13:45.155472   28938 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:13:45.155485   28938 certs.go:195] generating shared ca certs ...
	I1009 18:13:45.155502   28938 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.155656   28938 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:13:45.155696   28938 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:13:45.155704   28938 certs.go:257] generating profile certs ...
	I1009 18:13:45.155764   28938 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:13:45.155774   28938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt with IP's: []
	I1009 18:13:45.421098   28938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt ...
	I1009 18:13:45.421113   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: {Name:mkb0ba862d91b75d55c430c73e1c5bcc16fcff3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.421286   28938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key ...
	I1009 18:13:45.421293   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key: {Name:mk95dd8884ccfb14c0702cdee5340602c311249a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.421365   28938 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:13:45.421374   28938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt.01289d3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:13:45.551863   28938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt.01289d3a ...
	I1009 18:13:45.551877   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt.01289d3a: {Name:mkc45080b4459acfb4facd578d065670f25ce8bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.552036   28938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a ...
	I1009 18:13:45.552043   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a: {Name:mke0ecbe63619c782480690272a676adda38c358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.552107   28938 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt.01289d3a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt
	I1009 18:13:45.552194   28938 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key
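Note that the apiserver SAN list generated above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]) includes 10.96.0.1 because the first address of the 10.96.0.0/12 ServiceCIDR becomes the ClusterIP of the in-cluster kubernetes Service, so the serving certificate must be valid for that address as well as for loopback and the node IP.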
	I1009 18:13:45.552243   28938 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:13:45.552253   28938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt with IP's: []
	I1009 18:13:45.709784   28938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt ...
	I1009 18:13:45.709798   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt: {Name:mkfe2bab9c69bd167662b9dd2c293220fb9feda6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.709960   28938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key ...
	I1009 18:13:45.709967   28938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key: {Name:mk3d100d6b4c674e25246f4802072c8f7aaf8e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:13:45.710168   28938 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:13:45.710199   28938 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:13:45.710205   28938 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:13:45.710228   28938 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:13:45.710246   28938 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:13:45.710262   28938 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:13:45.710296   28938 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:13:45.710794   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:13:45.728891   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:13:45.746149   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:13:45.763549   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:13:45.781297   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:13:45.798893   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:13:45.816268   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:13:45.833282   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:13:45.850774   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:13:45.869693   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:13:45.887805   28938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:13:45.906545   28938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:13:45.918931   28938 ssh_runner.go:195] Run: openssl version
	I1009 18:13:45.924915   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:13:45.933182   28938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:13:45.936834   28938 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:13:45.936879   28938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:13:45.970746   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:13:45.979405   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:13:45.987773   28938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:13:45.991390   28938 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:13:45.991435   28938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:13:46.024910   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:13:46.033676   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:13:46.041976   28938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:13:46.045711   28938 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:13:46.045765   28938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:13:46.079838   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:13:46.088737   28938 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:13:46.092227   28938 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:13:46.092265   28938 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:13:46.092338   28938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:13:46.092384   28938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:13:46.118512   28938 cri.go:89] found id: ""
	I1009 18:13:46.118568   28938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:13:46.126417   28938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:13:46.134290   28938 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:13:46.134329   28938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:13:46.141779   28938 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:13:46.141794   28938 kubeadm.go:157] found existing configuration files:
	
	I1009 18:13:46.141835   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:13:46.149230   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:13:46.149272   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:13:46.156664   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:13:46.164131   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:13:46.164191   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:13:46.171436   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:13:46.179233   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:13:46.179287   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:13:46.186852   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:13:46.194879   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:13:46.194933   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
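The four grep/rm pairs above implement a single rule: a leftover kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A compact Go sketch of the same check (error handling collapsed to match the log flow; purely illustrative, not the minikube source):

	package main

	import "os/exec"

	func main() {
		endpoint := "https://control-plane.minikube.internal:8441"
		for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + name
			// grep exits non-zero when the endpoint (or the file itself) is missing.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				_ = exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}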
	I1009 18:13:46.202379   28938 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:13:46.240540   28938 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:13:46.240609   28938 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:13:46.261427   28938 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:13:46.261510   28938 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:13:46.261544   28938 kubeadm.go:318] OS: Linux
	I1009 18:13:46.261622   28938 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:13:46.261683   28938 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:13:46.261745   28938 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:13:46.261814   28938 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:13:46.261878   28938 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:13:46.261950   28938 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:13:46.262016   28938 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:13:46.262052   28938 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:13:46.318126   28938 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:13:46.318272   28938 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:13:46.318392   28938 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:13:46.325867   28938 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:13:46.328178   28938 out.go:252]   - Generating certificates and keys ...
	I1009 18:13:46.328291   28938 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:13:46.328417   28938 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:13:46.347994   28938 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:13:46.425915   28938 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:13:46.737398   28938 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:13:47.174121   28938 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:13:47.249107   28938 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:13:47.249274   28938 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-753440 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:13:47.345399   28938 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:13:47.345551   28938 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-753440 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:13:47.715032   28938 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:13:47.862303   28938 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:13:47.902694   28938 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:13:47.902761   28938 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:13:48.053753   28938 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:13:48.607654   28938 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:13:49.195685   28938 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:13:49.439103   28938 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:13:49.756797   28938 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:13:49.757447   28938 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:13:49.761181   28938 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:13:49.763943   28938 out.go:252]   - Booting up control plane ...
	I1009 18:13:49.764065   28938 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:13:49.764197   28938 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:13:49.764295   28938 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:13:49.777218   28938 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:13:49.777354   28938 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:13:49.784012   28938 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:13:49.784257   28938 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:13:49.784301   28938 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:13:49.875126   28938 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:13:49.875290   28938 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:13:51.376213   28938 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501243038s
	I1009 18:13:51.379177   28938 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:13:51.379308   28938 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:13:51.379440   28938 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:13:51.379544   28938 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:17:51.379533   28938 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000056433s
	I1009 18:17:51.379716   28938 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000047358s
	I1009 18:17:51.379871   28938 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000038065s
	I1009 18:17:51.379897   28938 kubeadm.go:318] 
	I1009 18:17:51.380061   28938 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:17:51.380219   28938 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:17:51.380343   28938 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:17:51.380492   28938 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:17:51.380645   28938 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:17:51.380716   28938 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:17:51.380720   28938 kubeadm.go:318] 
	I1009 18:17:51.383473   28938 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:17:51.383564   28938 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:17:51.384172   28938 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:17:51.384271   28938 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1009 18:17:51.384426   28938 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-753440 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-753440 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501243038s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000056433s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000047358s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000038065s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
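minikube's retry path, visible in the next few lines, first tears the partial control plane down with kubeadm reset and then re-runs init with the same config. The essential pair of commands, stripped of the long --ignore-preflight-errors list:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml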
	
	I1009 18:17:51.384519   28938 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:17:51.828043   28938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:17:51.840368   28938 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:17:51.840406   28938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:17:51.848182   28938 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:17:51.848190   28938 kubeadm.go:157] found existing configuration files:
	
	I1009 18:17:51.848224   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:17:51.855638   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:17:51.855676   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:17:51.862889   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:17:51.870119   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:17:51.870175   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:17:51.877006   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:17:51.884385   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:17:51.884420   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:17:51.891723   28938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:17:51.898962   28938 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:17:51.899001   28938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
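The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Condensed into one loop, using the same endpoint and files as in the log:

	endpoint="https://control-plane.minikube.internal:8441"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done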
	I1009 18:17:51.906295   28938 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:17:51.940597   28938 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:17:51.940641   28938 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:17:51.960281   28938 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:17:51.960360   28938 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:17:51.960396   28938 kubeadm.go:318] OS: Linux
	I1009 18:17:51.960450   28938 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:17:51.960513   28938 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:17:51.960576   28938 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:17:51.960660   28938 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:17:51.960706   28938 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:17:51.960757   28938 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:17:51.960796   28938 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:17:51.960837   28938 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:17:52.017821   28938 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:17:52.017964   28938 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:17:52.018083   28938 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:17:52.025164   28938 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:17:52.029147   28938 out.go:252]   - Generating certificates and keys ...
	I1009 18:17:52.029242   28938 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:17:52.029294   28938 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:17:52.029363   28938 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:17:52.029420   28938 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:17:52.029474   28938 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:17:52.029543   28938 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:17:52.029607   28938 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:17:52.029658   28938 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:17:52.029719   28938 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:17:52.029778   28938 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:17:52.029810   28938 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:17:52.029861   28938 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:17:52.241059   28938 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:17:52.411104   28938 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:17:52.567580   28938 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:17:52.672342   28938 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:17:53.027839   28938 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:17:53.028152   28938 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:17:53.030225   28938 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:17:53.032087   28938 out.go:252]   - Booting up control plane ...
	I1009 18:17:53.032195   28938 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:17:53.032308   28938 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:17:53.032413   28938 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:17:53.045447   28938 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:17:53.045584   28938 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:17:53.053044   28938 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:17:53.053317   28938 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:17:53.053374   28938 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:17:53.152621   28938 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:17:53.152788   28938 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:17:53.654118   28938 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.620488ms
	I1009 18:17:53.657149   28938 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:17:53.657291   28938 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:17:53.657401   28938 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:17:53.657467   28938 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:21:53.658175   28938 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000383935s
	I1009 18:21:53.658534   28938 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000621834s
	I1009 18:21:53.658751   28938 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00069997s
	I1009 18:21:53.658772   28938 kubeadm.go:318] 
	I1009 18:21:53.658954   28938 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:21:53.659082   28938 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:21:53.659218   28938 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:21:53.659345   28938 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:21:53.659450   28938 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:21:53.659558   28938 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:21:53.659562   28938 kubeadm.go:318] 
	I1009 18:21:53.662568   28938 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:21:53.662664   28938 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:21:53.663294   28938 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:21:53.663357   28938 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
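The three endpoints kubeadm polls are plain HTTPS liveness URLs, so the failing component can be probed directly from the node. A sketch using curl, where -k skips certificate verification (acceptable for a local health probe):

	curl -ksf https://192.168.49.2:8441/livez    && echo "apiserver ok"
	curl -ksf https://127.0.0.1:10257/healthz    && echo "controller-manager ok"
	curl -ksf https://127.0.0.1:10259/livez      && echo "scheduler ok"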
	I1009 18:21:53.663412   28938 kubeadm.go:402] duration metric: took 8m7.571151202s to StartCluster
	I1009 18:21:53.663446   28938 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:21:53.663490   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:21:53.690650   28938 cri.go:89] found id: ""
	I1009 18:21:53.690672   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.690677   28938 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:21:53.690683   28938 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:21:53.690728   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:21:53.715408   28938 cri.go:89] found id: ""
	I1009 18:21:53.715421   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.715427   28938 logs.go:284] No container was found matching "etcd"
	I1009 18:21:53.715431   28938 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:21:53.715491   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:21:53.741341   28938 cri.go:89] found id: ""
	I1009 18:21:53.741357   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.741365   28938 logs.go:284] No container was found matching "coredns"
	I1009 18:21:53.741371   28938 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:21:53.741422   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:21:53.766351   28938 cri.go:89] found id: ""
	I1009 18:21:53.766369   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.766379   28938 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:21:53.766384   28938 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:21:53.766432   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:21:53.790930   28938 cri.go:89] found id: ""
	I1009 18:21:53.790945   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.790953   28938 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:21:53.790959   28938 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:21:53.791016   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:21:53.816128   28938 cri.go:89] found id: ""
	I1009 18:21:53.816161   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.816169   28938 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:21:53.816177   28938 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:21:53.816235   28938 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:21:53.840580   28938 cri.go:89] found id: ""
	I1009 18:21:53.840598   28938 logs.go:282] 0 containers: []
	W1009 18:21:53.840607   28938 logs.go:284] No container was found matching "kindnet"
	I1009 18:21:53.840618   28938 logs.go:123] Gathering logs for kubelet ...
	I1009 18:21:53.840628   28938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:21:53.908088   28938 logs.go:123] Gathering logs for dmesg ...
	I1009 18:21:53.908108   28938 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:21:53.919980   28938 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:21:53.919999   28938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:21:53.975634   28938 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:21:53.968605    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.969171    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.970750    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.971199    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.972742    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:21:53.968605    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.969171    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.970750    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.971199    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:53.972742    2394 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:21:53.975662   28938 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:21:53.975671   28938 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:21:54.036106   28938 logs.go:123] Gathering logs for container status ...
	I1009 18:21:54.036126   28938 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
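The diagnostics gathered above can be reproduced by hand inside the node; these four sources are normally enough to localize a control-plane bring-up failure (flags simplified from the exact invocations in the log):

	sudo journalctl -u kubelet -n 400     # kubelet: pod sync and container-start errors
	sudo journalctl -u crio -n 400        # runtime: createCtr and sandbox errors
	sudo dmesg --level=warn,err,crit | tail -n 400
	sudo crictl ps -a                     # which containers exist and in what state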
	W1009 18:21:54.066284   28938 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.620488ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000383935s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000621834s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00069997s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:21:54.066363   28938 out.go:285] * 
	W1009 18:21:54.066427   28938 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.620488ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000383935s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000621834s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00069997s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:21:54.066439   28938 out.go:285] * 
	W1009 18:21:54.068047   28938 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
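For the issue report suggested in the box, the complete log bundle for this profile would be collected with:

	minikube logs --file=logs.txt -p functional-753440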
	I1009 18:21:54.071950   28938 out.go:203] 
	W1009 18:21:54.073724   28938 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.620488ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000383935s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000621834s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00069997s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:21:54.073743   28938 out.go:285] * 
	I1009 18:21:54.076212   28938 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:21:51 functional-753440 crio[791]: time="2025-10-09T18:21:51.565657271Z" level=info msg="createCtr: removing container 7d961ab44aa146c484d1ab9f80de638d6c20a7e65f2b16c798509306b90813aa" id=57646c5d-3173-4055-a635-777db49e4c80 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:51 functional-753440 crio[791]: time="2025-10-09T18:21:51.565690679Z" level=info msg="createCtr: deleting container 7d961ab44aa146c484d1ab9f80de638d6c20a7e65f2b16c798509306b90813aa from storage" id=57646c5d-3173-4055-a635-777db49e4c80 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:51 functional-753440 crio[791]: time="2025-10-09T18:21:51.567987577Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=57646c5d-3173-4055-a635-777db49e4c80 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.543443138Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=feabb5b3-6714-4826-929c-84989dadad01 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.543458834Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=fe75f241-9a47-420a-8082-20c215d4fba3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.544491844Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=bf09078b-e53a-432c-8900-e3e290a48228 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.544715679Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=bd5f5874-47be-4c18-aa67-7835dd057ccb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.54569459Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=d4fa2148-4a33-42b2-af6e-dd8f53e0e622 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.545747278Z" level=info msg="Creating container: kube-system/etcd-functional-753440/etcd" id=79adb87b-773d-43cf-a609-f20a66530613 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.545948903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.546020392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.552403432Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.553287692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.554584261Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.555196342Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.569981965Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=d4fa2148-4a33-42b2-af6e-dd8f53e0e622 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.571779628Z" level=info msg="createCtr: deleting container ID 30d7d19d20b7de7ba76055fcd2cd940e2f439ba0d05ed5161d159f6ad3771964 from idIndex" id=d4fa2148-4a33-42b2-af6e-dd8f53e0e622 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.57181685Z" level=info msg="createCtr: removing container 30d7d19d20b7de7ba76055fcd2cd940e2f439ba0d05ed5161d159f6ad3771964" id=d4fa2148-4a33-42b2-af6e-dd8f53e0e622 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.571850215Z" level=info msg="createCtr: deleting container 30d7d19d20b7de7ba76055fcd2cd940e2f439ba0d05ed5161d159f6ad3771964 from storage" id=d4fa2148-4a33-42b2-af6e-dd8f53e0e622 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.571966528Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79adb87b-773d-43cf-a609-f20a66530613 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.573686223Z" level=info msg="createCtr: deleting container ID ddb049050b753c46469ecba444b2365c85a039b08f8987b5de680758e4d52414 from idIndex" id=79adb87b-773d-43cf-a609-f20a66530613 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.573728113Z" level=info msg="createCtr: removing container ddb049050b753c46469ecba444b2365c85a039b08f8987b5de680758e4d52414" id=79adb87b-773d-43cf-a609-f20a66530613 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.573764723Z" level=info msg="createCtr: deleting container ddb049050b753c46469ecba444b2365c85a039b08f8987b5de680758e4d52414 from storage" id=79adb87b-773d-43cf-a609-f20a66530613 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.575790184Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=d4fa2148-4a33-42b2-af6e-dd8f53e0e622 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:21:54 functional-753440 crio[791]: time="2025-10-09T18:21:54.576239084Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=79adb87b-773d-43cf-a609-f20a66530613 name=/runtime.v1.RuntimeService/CreateContainer
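The repeated "cannot open sd-bus: No such file or directory" from createCtr is the actual failure behind every health-check timeout above: the OCI runtime is being asked to create container cgroups through systemd's D-Bus API, but no systemd bus socket is reachable inside the node. One hedged workaround, assuming CRI-O's standard drop-in directory, is to switch the runtime to the cgroupfs manager and restart CRI-O; whether that is the right fix here depends on why systemd's bus is missing in this image:

	# Hypothetical drop-in; with cgroupfs, conmon_cgroup must be "pod".
	sudo tee /etc/crio/crio.conf.d/10-cgroupfs.conf >/dev/null <<-'EOF'
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	EOF
	sudo systemctl restart crio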
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:21:54.971931    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:54.972528    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:54.974097    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:54.974580    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:21:54.976065    2560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
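The kubectl "connection refused" output is consistent with the empty container table above: nothing is serving on port 8441 at all. A one-line confirmation from inside the node:

	sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"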
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:21:55 up  1:04,  0 user,  load average: 0.14, 0.20, 0.14
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:21:51 functional-753440 kubelet[1796]: E1009 18:21:51.542487    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:21:51 functional-753440 kubelet[1796]: E1009 18:21:51.568344    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:21:51 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:21:51 functional-753440 kubelet[1796]:  > podSandboxID="a1601c351acb2109bc843118525e18f9874347bc3c77d062c9da98c9f01ca0c9"
	Oct 09 18:21:51 functional-753440 kubelet[1796]: E1009 18:21:51.568441    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:21:51 functional-753440 kubelet[1796]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:21:51 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:21:51 functional-753440 kubelet[1796]: E1009 18:21:51.568469    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	Oct 09 18:21:53 functional-753440 kubelet[1796]: E1009 18:21:53.557388    1796 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.542920    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.543101    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.576108    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:21:54 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:21:54 functional-753440 kubelet[1796]:  > podSandboxID="a0f669ac9226ee4ac7b841aacfe05ece4235d10b02fe7bb351eab32cadb9e24d"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.576277    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:21:54 functional-753440 kubelet[1796]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:21:54 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.576323    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.576502    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:21:54 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:21:54 functional-753440 kubelet[1796]:  > podSandboxID="b2bb9a720dde4343bb6d68e21981701423cf9ba8fc536a4b16c3a5d7282c9e5b"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.576592    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:21:54 functional-753440 kubelet[1796]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:21:54 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:21:54 functional-753440 kubelet[1796]: E1009 18:21:54.577777    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	

                                                
                                                
-- /stdout --
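
The repeated "cannot open sd-bus: No such file or directory" CreateContainerError above is the proximate failure: CRI-O in this profile runs with the systemd cgroup manager (see the cgroup-driver detection and the cgroup_manager rewrite later in this log), so the OCI runtime asks systemd over D-Bus to create the container's cgroup scope, and that call fails when the system bus socket is unreachable inside the kicbase container. A minimal diagnostic sketch, assuming the profile container is still running; these use the standard D-Bus socket path and the CRI-O drop-in edited later in this run, and are not commands the test itself executed:

	# is the system D-Bus socket present inside the node container?
	docker exec functional-753440 stat /run/dbus/system_bus_socket
	# which cgroup manager did minikube configure CRI-O with?
	docker exec functional-753440 grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf
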
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 6 (291.768213ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:21:55.352749   34675 status.go:458] kubeconfig endpoint: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
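
Because the start never produced a healthy apiserver, the profile entry is missing from the kubeconfig, which is why status reports that "functional-753440" does not appear in it. A hedged repair sketch using the command the output itself recommends (profile name taken from this run):

	out/minikube-linux-amd64 update-context -p functional-753440
	kubectl config current-context    # should report functional-753440 once the context is repaired
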
--- FAIL: TestFunctional/serial/StartWithProxy (501.89s)

                                                
                                    
TestFunctional/serial/SoftStart (366.58s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 18:21:55.366795   14880 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753440 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753440 --alsologtostderr -v=8: exit status 80 (6m3.889120814s)

                                                
                                                
-- stdout --
	* [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:21:55.407242   34792 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:21:55.407482   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407490   34792 out.go:374] Setting ErrFile to fd 2...
	I1009 18:21:55.407494   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407669   34792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:21:55.408109   34792 out.go:368] Setting JSON to false
	I1009 18:21:55.408948   34792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3863,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:21:55.409029   34792 start.go:141] virtualization: kvm guest
	I1009 18:21:55.411208   34792 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:21:55.412706   34792 notify.go:220] Checking for updates...
	I1009 18:21:55.412728   34792 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:21:55.414107   34792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:21:55.415609   34792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:55.417005   34792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:21:55.418411   34792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:21:55.419884   34792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:21:55.421538   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:55.421658   34792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:21:55.445068   34792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:21:55.445204   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.504624   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.494450296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.504746   34792 docker.go:318] overlay module found
	I1009 18:21:55.507261   34792 out.go:179] * Using the docker driver based on existing profile
	I1009 18:21:55.508504   34792 start.go:305] selected driver: docker
	I1009 18:21:55.508518   34792 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.508594   34792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:21:55.508665   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.566793   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.557358643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.567631   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:55.567714   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:55.567780   34792 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.569913   34792 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:21:55.571250   34792 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:21:55.572672   34792 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:21:55.573890   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:55.573921   34792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:21:55.573933   34792 cache.go:64] Caching tarball of preloaded images
	I1009 18:21:55.573992   34792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:21:55.574016   34792 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:21:55.574025   34792 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:21:55.574109   34792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:21:55.593603   34792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:21:55.593631   34792 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:21:55.593646   34792 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:21:55.593672   34792 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:21:55.593732   34792 start.go:364] duration metric: took 38.489µs to acquireMachinesLock for "functional-753440"
	I1009 18:21:55.593749   34792 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:21:55.593758   34792 fix.go:54] fixHost starting: 
	I1009 18:21:55.593970   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:55.610925   34792 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:21:55.610951   34792 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:21:55.612681   34792 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:21:55.612704   34792 machine.go:93] provisionDockerMachine start ...
	I1009 18:21:55.612764   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.630174   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.630389   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.630401   34792 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:21:55.773949   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.773975   34792 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:21:55.774031   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.792726   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.792949   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.792962   34792 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:21:55.945969   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.946040   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.963600   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.963821   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.963839   34792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
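	# (inserted annotation, not captured output) the guard above is idempotent: it
	# rewrites any existing 127.0.1.1 entry or appends one, so /etc/hosts ends up
	# containing a line of the form:
	#   127.0.1.1 functional-753440
	# a hedged spot-check from the host: docker exec functional-753440 grep 127.0.1.1 /etc/hosts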
	I1009 18:21:56.108677   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:21:56.108700   34792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:21:56.108717   34792 ubuntu.go:190] setting up certificates
	I1009 18:21:56.108727   34792 provision.go:84] configureAuth start
	I1009 18:21:56.108783   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:56.127107   34792 provision.go:143] copyHostCerts
	I1009 18:21:56.127166   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127197   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:21:56.127212   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127290   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:21:56.127394   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127416   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:21:56.127420   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127449   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:21:56.127507   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127523   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:21:56.127526   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127549   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:21:56.127598   34792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:21:56.380428   34792 provision.go:177] copyRemoteCerts
	I1009 18:21:56.380482   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:21:56.380515   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.398054   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.500395   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:21:56.500448   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:21:56.517603   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:21:56.517655   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:21:56.534349   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:21:56.534397   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:21:56.551305   34792 provision.go:87] duration metric: took 442.551304ms to configureAuth
	I1009 18:21:56.551330   34792 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:21:56.551498   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:56.551579   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.568651   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:56.568866   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:56.568881   34792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:21:56.838390   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:21:56.838414   34792 machine.go:96] duration metric: took 1.225703269s to provisionDockerMachine
	I1009 18:21:56.838426   34792 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:21:56.838437   34792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:21:56.838510   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:21:56.838559   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.856450   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.959658   34792 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:21:56.963119   34792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 18:21:56.963150   34792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 18:21:56.963158   34792 command_runner.go:130] > VERSION_ID="12"
	I1009 18:21:56.963165   34792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 18:21:56.963174   34792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 18:21:56.963179   34792 command_runner.go:130] > ID=debian
	I1009 18:21:56.963186   34792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 18:21:56.963194   34792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 18:21:56.963212   34792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 18:21:56.963315   34792 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:21:56.963334   34792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:21:56.963342   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:21:56.963382   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:21:56.963448   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:21:56.963463   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:21:56.963529   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:21:56.963535   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> /etc/test/nested/copy/14880/hosts
	I1009 18:21:56.963565   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:21:56.970888   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:56.988730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:21:57.005907   34792 start.go:296] duration metric: took 167.469505ms for postStartSetup
	I1009 18:21:57.005971   34792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:21:57.006025   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.023806   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.123166   34792 command_runner.go:130] > 39%
	I1009 18:21:57.123235   34792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:21:57.127917   34792 command_runner.go:130] > 179G
	I1009 18:21:57.127948   34792 fix.go:56] duration metric: took 1.534189396s for fixHost
	I1009 18:21:57.127960   34792 start.go:83] releasing machines lock for "functional-753440", held for 1.534218366s
	I1009 18:21:57.128034   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:57.145978   34792 ssh_runner.go:195] Run: cat /version.json
	I1009 18:21:57.146019   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.146063   34792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:21:57.146159   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.164302   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.164547   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.263542   34792 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 18:21:57.263690   34792 ssh_runner.go:195] Run: systemctl --version
	I1009 18:21:57.316955   34792 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 18:21:57.317002   34792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 18:21:57.317022   34792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 18:21:57.317074   34792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:21:57.353021   34792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:21:57.357737   34792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 18:21:57.357788   34792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:21:57.357834   34792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:21:57.365811   34792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:21:57.365833   34792 start.go:495] detecting cgroup driver to use...
	I1009 18:21:57.365861   34792 detect.go:190] detected "systemd" cgroup driver on host os
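	# (inserted annotation, not captured output) the "systemd" driver detected here
	# comes from the CgroupDriver field of the docker info dump above; it drives the
	# cgroup_manager = "systemd" rewrite a few lines below, the same setting
	# implicated in the sd-bus CreateContainerErrors at the top of this section.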
	I1009 18:21:57.365903   34792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:21:57.380237   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:21:57.392796   34792 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:21:57.392859   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:21:57.407315   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:21:57.419892   34792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:21:57.506572   34792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:21:57.589596   34792 docker.go:234] disabling docker service ...
	I1009 18:21:57.589673   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:21:57.603725   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:21:57.615780   34792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:21:57.696218   34792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:21:57.781915   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:21:57.794534   34792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:21:57.808497   34792 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
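	# (inserted annotation, not captured output) with /etc/crictl.yaml pointing at
	# the CRI-O socket as written above, the bare crictl invocations later in this
	# log need no --runtime-endpoint flag; a hedged manual check would be:
	#   sudo crictl info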
	I1009 18:21:57.808534   34792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:21:57.808589   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.817764   34792 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:21:57.817814   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.827115   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.836066   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.844563   34792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:21:57.852458   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.861227   34792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.869900   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
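	# (inserted annotation, not captured output) net effect of the sed edits above,
	# sketched as the relevant /etc/crio/crio.conf.d/02-crio.conf fragment; the
	# section headers are assumed from the stock CRI-O layout, not shown in this log:
	#   [crio.image]
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   [crio.runtime]
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]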
	I1009 18:21:57.878917   34792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:21:57.886570   34792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 18:21:57.886644   34792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:21:57.894517   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:57.979064   34792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:21:58.090717   34792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:21:58.090783   34792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:21:58.095044   34792 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 18:21:58.095068   34792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 18:21:58.095074   34792 command_runner.go:130] > Device: 0,59	Inode: 3803        Links: 1
	I1009 18:21:58.095080   34792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.095085   34792 command_runner.go:130] > Access: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095093   34792 command_runner.go:130] > Modify: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095101   34792 command_runner.go:130] > Change: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095108   34792 command_runner.go:130] >  Birth: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095130   34792 start.go:563] Will wait 60s for crictl version
	I1009 18:21:58.095214   34792 ssh_runner.go:195] Run: which crictl
	I1009 18:21:58.099101   34792 command_runner.go:130] > /usr/local/bin/crictl
	I1009 18:21:58.099187   34792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:21:58.122816   34792 command_runner.go:130] > Version:  0.1.0
	I1009 18:21:58.122840   34792 command_runner.go:130] > RuntimeName:  cri-o
	I1009 18:21:58.122845   34792 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 18:21:58.122850   34792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 18:21:58.122867   34792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:21:58.122920   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.149899   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.149922   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.149928   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.149933   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.149944   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.149949   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.149952   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.149957   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.149961   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.149964   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.149967   34792 command_runner.go:130] >      static
	I1009 18:21:58.149971   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.149975   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.149978   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.149982   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.149988   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.149991   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.149998   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.150002   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.150007   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.151351   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.178662   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.178683   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.178689   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.178693   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.178698   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.178702   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.178706   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.178714   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.178718   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.178721   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.178724   34792 command_runner.go:130] >      static
	I1009 18:21:58.178728   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.178732   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.178735   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.178739   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.178742   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.178757   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.178764   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.178768   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.178771   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.181232   34792 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:21:58.182844   34792 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:21:58.200852   34792 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:21:58.205024   34792 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1009 18:21:58.205096   34792 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:21:58.205232   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:58.205276   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.234303   34792 command_runner.go:130] > {
	I1009 18:21:58.234338   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.234345   34792 command_runner.go:130] >     {
	I1009 18:21:58.234355   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.234362   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234369   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.234373   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234378   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234388   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.234400   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.234409   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234417   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.234426   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234435   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234443   34792 command_runner.go:130] >     },
	I1009 18:21:58.234449   34792 command_runner.go:130] >     {
	I1009 18:21:58.234460   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.234468   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234478   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.234486   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234494   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234509   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.234523   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.234532   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234539   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.234548   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234565   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234581   34792 command_runner.go:130] >     },
	I1009 18:21:58.234590   34792 command_runner.go:130] >     {
	I1009 18:21:58.234600   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.234610   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234619   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.234627   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234635   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234649   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.234665   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.234673   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234680   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.234689   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.234697   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234713   34792 command_runner.go:130] >     },
	I1009 18:21:58.234721   34792 command_runner.go:130] >     {
	I1009 18:21:58.234731   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.234740   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234749   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.234757   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234765   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234780   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.234794   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.234802   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234809   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.234817   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234824   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234833   34792 command_runner.go:130] >       },
	I1009 18:21:58.234849   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234858   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234864   34792 command_runner.go:130] >     },
	I1009 18:21:58.234871   34792 command_runner.go:130] >     {
	I1009 18:21:58.234882   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.234891   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234906   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.234914   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234921   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234936   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.234952   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.234960   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234967   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.234976   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234984   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234991   34792 command_runner.go:130] >       },
	I1009 18:21:58.234999   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235008   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235015   34792 command_runner.go:130] >     },
	I1009 18:21:58.235023   34792 command_runner.go:130] >     {
	I1009 18:21:58.235033   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.235042   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235052   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.235059   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235065   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235078   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.235098   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.235106   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235113   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.235122   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235130   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235152   34792 command_runner.go:130] >       },
	I1009 18:21:58.235159   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235168   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235174   34792 command_runner.go:130] >     },
	I1009 18:21:58.235183   34792 command_runner.go:130] >     {
	I1009 18:21:58.235193   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.235202   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235211   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.235227   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235236   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235248   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.235262   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.235271   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235278   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.235286   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235294   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235302   34792 command_runner.go:130] >     },
	I1009 18:21:58.235314   34792 command_runner.go:130] >     {
	I1009 18:21:58.235326   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.235333   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235344   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.235352   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235359   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235373   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.235408   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.235416   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235424   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.235433   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235441   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235450   34792 command_runner.go:130] >       },
	I1009 18:21:58.235456   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235464   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235470   34792 command_runner.go:130] >     },
	I1009 18:21:58.235477   34792 command_runner.go:130] >     {
	I1009 18:21:58.235488   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.235496   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235508   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.235515   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235522   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235536   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.235550   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.235566   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235576   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.235582   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235592   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.235599   34792 command_runner.go:130] >       },
	I1009 18:21:58.235606   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235615   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.235621   34792 command_runner.go:130] >     }
	I1009 18:21:58.235627   34792 command_runner.go:130] >   ]
	I1009 18:21:58.235633   34792 command_runner.go:130] > }
	I1009 18:21:58.236008   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.236027   34792 crio.go:433] Images already preloaded, skipping extraction
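The "all images are preloaded" decision above comes from parsing the `sudo crictl images --output json` listing and checking that every expected repo tag is present. A minimal sketch of that check in Go — not minikube's actual crio.go code; the struct fields simply mirror the JSON keys shown in the log above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // imageList mirrors the shape of the crictl JSON logged above.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // preloaded reports whether every tag in want appears in the crictl output.
    func preloaded(crictlJSON []byte, want []string) (bool, error) {
    	var list imageList
    	if err := json.Unmarshal(crictlJSON, &list); err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, tag := range want {
    		if !have[tag] {
    			return false, nil // at least one image missing: extraction needed
    		}
    	}
    	return true, nil
    }

    func main() {
    	out := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
    	ok, _ := preloaded(out, []string{"registry.k8s.io/pause:3.10.1"})
    	fmt.Println(ok) // true
    }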
	I1009 18:21:58.236090   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.260405   34792 command_runner.go:130] > {
	I1009 18:21:58.260434   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.260440   34792 command_runner.go:130] >     {
	I1009 18:21:58.260454   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.260464   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260473   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.260483   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260490   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260505   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.260520   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.260529   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260540   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.260550   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260560   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260566   34792 command_runner.go:130] >     },
	I1009 18:21:58.260575   34792 command_runner.go:130] >     {
	I1009 18:21:58.260586   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.260593   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260606   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.260615   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260624   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260639   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.260653   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.260661   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260667   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.260674   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260681   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260689   34792 command_runner.go:130] >     },
	I1009 18:21:58.260698   34792 command_runner.go:130] >     {
	I1009 18:21:58.260711   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.260721   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260732   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.260740   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260746   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260759   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.260769   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.260777   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260785   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.260794   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.260804   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260812   34792 command_runner.go:130] >     },
	I1009 18:21:58.260817   34792 command_runner.go:130] >     {
	I1009 18:21:58.260829   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.260838   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260848   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.260854   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260861   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260876   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.260890   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.260897   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260904   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.260914   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.260923   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.260931   34792 command_runner.go:130] >       },
	I1009 18:21:58.260939   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260949   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260957   34792 command_runner.go:130] >     },
	I1009 18:21:58.260965   34792 command_runner.go:130] >     {
	I1009 18:21:58.260974   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.260984   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260992   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.261000   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261007   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261018   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.261032   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.261040   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261047   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.261056   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261066   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261073   34792 command_runner.go:130] >       },
	I1009 18:21:58.261083   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261093   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261101   34792 command_runner.go:130] >     },
	I1009 18:21:58.261107   34792 command_runner.go:130] >     {
	I1009 18:21:58.261119   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.261128   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261153   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.261159   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261169   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261181   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.261196   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.261205   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261214   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.261223   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261234   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261243   34792 command_runner.go:130] >       },
	I1009 18:21:58.261249   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261258   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261266   34792 command_runner.go:130] >     },
	I1009 18:21:58.261270   34792 command_runner.go:130] >     {
	I1009 18:21:58.261283   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.261295   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261306   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.261314   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261321   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261334   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.261349   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.261356   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261364   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.261372   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261379   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261384   34792 command_runner.go:130] >     },
	I1009 18:21:58.261393   34792 command_runner.go:130] >     {
	I1009 18:21:58.261402   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.261409   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261417   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.261422   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261428   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261439   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.261460   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.261467   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261473   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.261482   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261491   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261498   34792 command_runner.go:130] >       },
	I1009 18:21:58.261507   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261516   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261525   34792 command_runner.go:130] >     },
	I1009 18:21:58.261533   34792 command_runner.go:130] >     {
	I1009 18:21:58.261543   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.261549   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261555   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.261563   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261570   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261584   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.261597   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.261607   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261614   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.261620   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261626   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.261632   34792 command_runner.go:130] >       },
	I1009 18:21:58.261636   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261641   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.261649   34792 command_runner.go:130] >     }
	I1009 18:21:58.261655   34792 command_runner.go:130] >   ]
	I1009 18:21:58.261663   34792 command_runner.go:130] > }
	I1009 18:21:58.262011   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.262027   34792 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:21:58.262034   34792 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:21:58.262124   34792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
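A note on the kubelet unit logged above: the empty `ExecStart=` line followed by a second `ExecStart=` is standard systemd drop-in semantics — the empty assignment clears the base unit's command list before the override installs the new one. A sketch of how such a drop-in could be templated in Go (not kubeadm.go's actual template; the flag list here is trimmed from the full ExecStart logged above):

    package main

    import (
    	"os"
    	"text/template"
    )

    // The empty "ExecStart=" resets the base unit's command; the second
    // "ExecStart=" then sets the override. Flags are a trimmed subset.
    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Version": "v1.34.1",
    		"Node":    "functional-753440",
    		"IP":      "192.168.49.2",
    	})
    }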
	I1009 18:21:58.262213   34792 ssh_runner.go:195] Run: crio config
	I1009 18:21:58.302300   34792 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 18:21:58.302331   34792 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 18:21:58.302340   34792 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 18:21:58.302345   34792 command_runner.go:130] > #
	I1009 18:21:58.302356   34792 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 18:21:58.302365   34792 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 18:21:58.302374   34792 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 18:21:58.302388   34792 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 18:21:58.302395   34792 command_runner.go:130] > # reload'.
	I1009 18:21:58.302413   34792 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 18:21:58.302424   34792 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 18:21:58.302434   34792 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 18:21:58.302446   34792 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 18:21:58.302451   34792 command_runner.go:130] > [crio]
	I1009 18:21:58.302460   34792 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 18:21:58.302491   34792 command_runner.go:130] > # container images, in this directory.
	I1009 18:21:58.302515   34792 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 18:21:58.302526   34792 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 18:21:58.302534   34792 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 18:21:58.302549   34792 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than in Root.
	I1009 18:21:58.302558   34792 command_runner.go:130] > # imagestore = ""
	I1009 18:21:58.302569   34792 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 18:21:58.302588   34792 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 18:21:58.302596   34792 command_runner.go:130] > # storage_driver = "overlay"
	I1009 18:21:58.302604   34792 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 18:21:58.302618   34792 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 18:21:58.302625   34792 command_runner.go:130] > # storage_option = [
	I1009 18:21:58.302630   34792 command_runner.go:130] > # ]
	I1009 18:21:58.302640   34792 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 18:21:58.302649   34792 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 18:21:58.302660   34792 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 18:21:58.302668   34792 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 18:21:58.302681   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 18:21:58.302689   34792 command_runner.go:130] > # always happen on a node reboot
	I1009 18:21:58.302700   34792 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 18:21:58.302714   34792 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 18:21:58.302727   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 18:21:58.302738   34792 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 18:21:58.302745   34792 command_runner.go:130] > # version_file_persist = ""
	I1009 18:21:58.302760   34792 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 18:21:58.302779   34792 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 18:21:58.302786   34792 command_runner.go:130] > # internal_wipe = true
	I1009 18:21:58.302800   34792 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 18:21:58.302809   34792 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 18:21:58.302823   34792 command_runner.go:130] > # internal_repair = true
	I1009 18:21:58.302832   34792 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 18:21:58.302841   34792 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 18:21:58.302850   34792 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 18:21:58.302858   34792 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 18:21:58.302871   34792 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 18:21:58.302877   34792 command_runner.go:130] > [crio.api]
	I1009 18:21:58.302889   34792 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 18:21:58.302895   34792 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 18:21:58.302903   34792 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 18:21:58.302908   34792 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 18:21:58.302918   34792 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 18:21:58.302922   34792 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 18:21:58.302928   34792 command_runner.go:130] > # stream_port = "0"
	I1009 18:21:58.302935   34792 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 18:21:58.302943   34792 command_runner.go:130] > # stream_enable_tls = false
	I1009 18:21:58.302953   34792 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 18:21:58.302963   34792 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 18:21:58.302972   34792 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 18:21:58.302984   34792 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303003   34792 command_runner.go:130] > # stream_tls_cert = ""
	I1009 18:21:58.303014   34792 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 18:21:58.303019   34792 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303024   34792 command_runner.go:130] > # stream_tls_key = ""
	I1009 18:21:58.303031   34792 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 18:21:58.303041   34792 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 18:21:58.303054   34792 command_runner.go:130] > # automatically pick up the changes.
	I1009 18:21:58.303061   34792 command_runner.go:130] > # stream_tls_ca = ""
	I1009 18:21:58.303083   34792 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303094   34792 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 18:21:58.303103   34792 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303111   34792 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 18:21:58.303120   34792 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 18:21:58.303130   34792 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 18:21:58.303156   34792 command_runner.go:130] > [crio.runtime]
	I1009 18:21:58.303167   34792 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 18:21:58.303176   34792 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 18:21:58.303182   34792 command_runner.go:130] > # "nofile=1024:2048"
	I1009 18:21:58.303192   34792 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 18:21:58.303201   34792 command_runner.go:130] > # default_ulimits = [
	I1009 18:21:58.303207   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303219   34792 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 18:21:58.303225   34792 command_runner.go:130] > # no_pivot = false
	I1009 18:21:58.303234   34792 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 18:21:58.303261   34792 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 18:21:58.303272   34792 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 18:21:58.303282   34792 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 18:21:58.303294   34792 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 18:21:58.303307   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303315   34792 command_runner.go:130] > # conmon = ""
	I1009 18:21:58.303321   34792 command_runner.go:130] > # Cgroup setting for conmon
	I1009 18:21:58.303330   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 18:21:58.303336   34792 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 18:21:58.303344   34792 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 18:21:58.303351   34792 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 18:21:58.303361   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303366   34792 command_runner.go:130] > # conmon_env = [
	I1009 18:21:58.303370   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303377   34792 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 18:21:58.303389   34792 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 18:21:58.303398   34792 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 18:21:58.303404   34792 command_runner.go:130] > # default_env = [
	I1009 18:21:58.303408   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303417   34792 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 18:21:58.303434   34792 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1009 18:21:58.303443   34792 command_runner.go:130] > # selinux = false
	I1009 18:21:58.303454   34792 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 18:21:58.303468   34792 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 18:21:58.303479   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303489   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.303500   34792 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 18:21:58.303513   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303520   34792 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 18:21:58.303530   34792 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 18:21:58.303543   34792 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 18:21:58.303553   34792 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 18:21:58.303567   34792 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I1009 18:21:58.303578   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303586   34792 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 18:21:58.303597   34792 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 18:21:58.303603   34792 command_runner.go:130] > # the cgroup blockio controller.
	I1009 18:21:58.303610   34792 command_runner.go:130] > # blockio_config_file = ""
	I1009 18:21:58.303625   34792 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 18:21:58.303631   34792 command_runner.go:130] > # blockio parameters.
	I1009 18:21:58.303639   34792 command_runner.go:130] > # blockio_reload = false
	I1009 18:21:58.303649   34792 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 18:21:58.303659   34792 command_runner.go:130] > # irqbalance daemon.
	I1009 18:21:58.303667   34792 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 18:21:58.303718   34792 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 18:21:58.303738   34792 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 18:21:58.303748   34792 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 18:21:58.303756   34792 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 18:21:58.303765   34792 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 18:21:58.303772   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303777   34792 command_runner.go:130] > # rdt_config_file = ""
	I1009 18:21:58.303787   34792 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 18:21:58.303793   34792 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 18:21:58.303802   34792 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 18:21:58.303809   34792 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 18:21:58.303817   34792 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 18:21:58.303827   34792 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 18:21:58.303836   34792 command_runner.go:130] > # will be added.
	I1009 18:21:58.303844   34792 command_runner.go:130] > # default_capabilities = [
	I1009 18:21:58.303853   34792 command_runner.go:130] > # 	"CHOWN",
	I1009 18:21:58.303860   34792 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 18:21:58.303868   34792 command_runner.go:130] > # 	"FSETID",
	I1009 18:21:58.303874   34792 command_runner.go:130] > # 	"FOWNER",
	I1009 18:21:58.303883   34792 command_runner.go:130] > # 	"SETGID",
	I1009 18:21:58.303899   34792 command_runner.go:130] > # 	"SETUID",
	I1009 18:21:58.303908   34792 command_runner.go:130] > # 	"SETPCAP",
	I1009 18:21:58.303916   34792 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 18:21:58.303925   34792 command_runner.go:130] > # 	"KILL",
	I1009 18:21:58.303931   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303944   34792 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 18:21:58.303958   34792 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 18:21:58.303969   34792 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 18:21:58.303982   34792 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 18:21:58.304001   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304011   34792 command_runner.go:130] > default_sysctls = [
	I1009 18:21:58.304018   34792 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 18:21:58.304025   34792 command_runner.go:130] > ]
	I1009 18:21:58.304033   34792 command_runner.go:130] > # List of devices on the host that a
	I1009 18:21:58.304046   34792 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 18:21:58.304055   34792 command_runner.go:130] > # allowed_devices = [
	I1009 18:21:58.304063   34792 command_runner.go:130] > # 	"/dev/fuse",
	I1009 18:21:58.304071   34792 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 18:21:58.304077   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304088   34792 command_runner.go:130] > # List of additional devices, specified as
	I1009 18:21:58.304102   34792 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 18:21:58.304113   34792 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 18:21:58.304124   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304153   34792 command_runner.go:130] > # additional_devices = [
	I1009 18:21:58.304163   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304172   34792 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 18:21:58.304182   34792 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 18:21:58.304188   34792 command_runner.go:130] > # 	"/etc/cdi",
	I1009 18:21:58.304197   34792 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 18:21:58.304202   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304212   34792 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 18:21:58.304225   34792 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 18:21:58.304234   34792 command_runner.go:130] > # Defaults to false.
	I1009 18:21:58.304243   34792 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 18:21:58.304257   34792 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 18:21:58.304269   34792 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 18:21:58.304278   34792 command_runner.go:130] > # hooks_dir = [
	I1009 18:21:58.304287   34792 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 18:21:58.304294   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304304   34792 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 18:21:58.304317   34792 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 18:21:58.304329   34792 command_runner.go:130] > # its default mounts from the following two files:
	I1009 18:21:58.304337   34792 command_runner.go:130] > #
	I1009 18:21:58.304347   34792 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 18:21:58.304361   34792 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 18:21:58.304382   34792 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 18:21:58.304389   34792 command_runner.go:130] > #
	I1009 18:21:58.304399   34792 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 18:21:58.304413   34792 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 18:21:58.304427   34792 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 18:21:58.304438   34792 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 18:21:58.304447   34792 command_runner.go:130] > #
	I1009 18:21:58.304455   34792 command_runner.go:130] > # default_mounts_file = ""
	I1009 18:21:58.304466   34792 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 18:21:58.304479   34792 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 18:21:58.304494   34792 command_runner.go:130] > # pids_limit = -1
	I1009 18:21:58.304508   34792 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1009 18:21:58.304521   34792 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 18:21:58.304532   34792 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 18:21:58.304547   34792 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 18:21:58.304557   34792 command_runner.go:130] > # log_size_max = -1
	I1009 18:21:58.304569   34792 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 18:21:58.304578   34792 command_runner.go:130] > # log_to_journald = false
	I1009 18:21:58.304601   34792 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 18:21:58.304614   34792 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 18:21:58.304622   34792 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 18:21:58.304634   34792 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 18:21:58.304647   34792 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 18:21:58.304657   34792 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 18:21:58.304669   34792 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 18:21:58.304677   34792 command_runner.go:130] > # read_only = false
	I1009 18:21:58.304688   34792 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 18:21:58.304700   34792 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 18:21:58.304708   34792 command_runner.go:130] > # live configuration reload.
	I1009 18:21:58.304716   34792 command_runner.go:130] > # log_level = "info"
	I1009 18:21:58.304726   34792 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 18:21:58.304737   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.304746   34792 command_runner.go:130] > # log_filter = ""
	I1009 18:21:58.304761   34792 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304773   34792 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 18:21:58.304781   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304795   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304805   34792 command_runner.go:130] > # uid_mappings = ""
	I1009 18:21:58.304815   34792 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304827   34792 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 18:21:58.304837   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304849   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304863   34792 command_runner.go:130] > # gid_mappings = ""
	I1009 18:21:58.304890   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 18:21:58.304904   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304916   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304929   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304939   34792 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 18:21:58.304949   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 18:21:58.304961   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304971   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304986   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.305032   34792 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 18:21:58.305045   34792 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 18:21:58.305054   34792 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 18:21:58.305063   34792 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 18:21:58.305074   34792 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 18:21:58.305084   34792 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 18:21:58.305097   34792 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 18:21:58.305106   34792 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 18:21:58.305116   34792 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 18:21:58.305124   34792 command_runner.go:130] > # drop_infra_ctr = true
	I1009 18:21:58.305148   34792 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 18:21:58.305162   34792 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 18:21:58.305177   34792 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 18:21:58.305185   34792 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 18:21:58.305197   34792 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 18:21:58.305209   34792 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 18:21:58.305222   34792 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 18:21:58.305233   34792 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 18:21:58.305241   34792 command_runner.go:130] > # shared_cpuset = ""
	I1009 18:21:58.305251   34792 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 18:21:58.305262   34792 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 18:21:58.305270   34792 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 18:21:58.305284   34792 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 18:21:58.305293   34792 command_runner.go:130] > # pinns_path = ""
	I1009 18:21:58.305305   34792 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 18:21:58.305318   34792 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 18:21:58.305328   34792 command_runner.go:130] > # enable_criu_support = true
	I1009 18:21:58.305337   34792 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 18:21:58.305350   34792 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 18:21:58.305359   34792 command_runner.go:130] > # enable_pod_events = false
	I1009 18:21:58.305371   34792 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 18:21:58.305382   34792 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 18:21:58.305389   34792 command_runner.go:130] > # default_runtime = "crun"
	I1009 18:21:58.305401   34792 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 18:21:58.305415   34792 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1009 18:21:58.305432   34792 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 18:21:58.305444   34792 command_runner.go:130] > # creation as a file is not desired either.
	I1009 18:21:58.305460   34792 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 18:21:58.305471   34792 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 18:21:58.305480   34792 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 18:21:58.305488   34792 command_runner.go:130] > # ]
	I1009 18:21:58.305499   34792 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 18:21:58.305512   34792 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 18:21:58.305524   34792 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 18:21:58.305535   34792 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 18:21:58.305542   34792 command_runner.go:130] > #
	I1009 18:21:58.305551   34792 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 18:21:58.305561   34792 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 18:21:58.305570   34792 command_runner.go:130] > # runtime_type = "oci"
	I1009 18:21:58.305582   34792 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 18:21:58.305590   34792 command_runner.go:130] > # inherit_default_runtime = false
	I1009 18:21:58.305601   34792 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 18:21:58.305611   34792 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 18:21:58.305619   34792 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 18:21:58.305628   34792 command_runner.go:130] > # monitor_env = []
	I1009 18:21:58.305638   34792 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 18:21:58.305647   34792 command_runner.go:130] > # allowed_annotations = []
	I1009 18:21:58.305665   34792 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 18:21:58.305674   34792 command_runner.go:130] > # no_sync_log = false
	I1009 18:21:58.305681   34792 command_runner.go:130] > # default_annotations = {}
	I1009 18:21:58.305690   34792 command_runner.go:130] > # stream_websockets = false
	I1009 18:21:58.305697   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.305730   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.305743   34792 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 18:21:58.305756   34792 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 18:21:58.305769   34792 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 18:21:58.305779   34792 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 18:21:58.305788   34792 command_runner.go:130] > #   in $PATH.
	I1009 18:21:58.305800   34792 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 18:21:58.305811   34792 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 18:21:58.305823   34792 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 18:21:58.305832   34792 command_runner.go:130] > #   state.
	I1009 18:21:58.305842   34792 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 18:21:58.305854   34792 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 18:21:58.305865   34792 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 18:21:58.305877   34792 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 18:21:58.305888   34792 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 18:21:58.305902   34792 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 18:21:58.305914   34792 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 18:21:58.305928   34792 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 18:21:58.305940   34792 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 18:21:58.305948   34792 command_runner.go:130] > #   The currently recognized values are:
	I1009 18:21:58.305962   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 18:21:58.305977   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 18:21:58.305989   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 18:21:58.306007   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 18:21:58.306022   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 18:21:58.306036   34792 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 18:21:58.306050   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 18:21:58.306061   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 18:21:58.306082   34792 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 18:21:58.306095   34792 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 18:21:58.306109   34792 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 18:21:58.306121   34792 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 18:21:58.306132   34792 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 18:21:58.306154   34792 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 18:21:58.306166   34792 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 18:21:58.306181   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 18:21:58.306194   34792 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 18:21:58.306204   34792 command_runner.go:130] > #   deprecated option "conmon".
	I1009 18:21:58.306216   34792 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 18:21:58.306226   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 18:21:58.306240   34792 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 18:21:58.306250   34792 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 18:21:58.306260   34792 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 18:21:58.306271   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 18:21:58.306285   34792 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1009 18:21:58.306294   34792 command_runner.go:130] > #   conmon-rs by using:
	I1009 18:21:58.306306   34792 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 18:21:58.306321   34792 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 18:21:58.306336   34792 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 18:21:58.306350   34792 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 18:21:58.306363   34792 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 18:21:58.306378   34792 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 18:21:58.306392   34792 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 18:21:58.306402   34792 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 18:21:58.306417   34792 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 18:21:58.306431   34792 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 18:21:58.306441   34792 command_runner.go:130] > #   when a machine crash happens.
	I1009 18:21:58.306452   34792 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 18:21:58.306467   34792 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 18:21:58.306481   34792 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 18:21:58.306492   34792 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 18:21:58.306506   34792 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 18:21:58.306520   34792 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 18:21:58.306525   34792 command_runner.go:130] > #
	I1009 18:21:58.306534   34792 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 18:21:58.306542   34792 command_runner.go:130] > #
	I1009 18:21:58.306552   34792 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 18:21:58.306565   34792 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 18:21:58.306574   34792 command_runner.go:130] > #
	I1009 18:21:58.306584   34792 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 18:21:58.306597   34792 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 18:21:58.306605   34792 command_runner.go:130] > #
	I1009 18:21:58.306615   34792 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 18:21:58.306623   34792 command_runner.go:130] > # feature.
	I1009 18:21:58.306629   34792 command_runner.go:130] > #
	I1009 18:21:58.306641   34792 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1009 18:21:58.306654   34792 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 18:21:58.306667   34792 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 18:21:58.306680   34792 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 18:21:58.306692   34792 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 18:21:58.306700   34792 command_runner.go:130] > #
	I1009 18:21:58.306710   34792 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 18:21:58.306723   34792 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 18:21:58.306730   34792 command_runner.go:130] > #
	I1009 18:21:58.306740   34792 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1009 18:21:58.306752   34792 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 18:21:58.306760   34792 command_runner.go:130] > #
	I1009 18:21:58.306770   34792 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 18:21:58.306782   34792 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 18:21:58.306788   34792 command_runner.go:130] > # limitation.
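As a concrete illustration, a pod opting into the notifier could look like the minimal sketch below. This assumes the active runtime handler lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations (the crun handler in the config below only allows "io.containers.trace-syscall"); the pod name and image are placeholders.

apiVersion: v1
kind: Pod
metadata:
  name: notifier-demo                                  # hypothetical name
  annotations:
    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate the workload after the 5s timeout
spec:
  restartPolicy: Never   # required, otherwise the kubelet restarts the container immediately
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10.1
    securityContext:
      seccompProfile:
        type: RuntimeDefault   # a seccomp profile must be in effect for syscalls to be blocked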
	I1009 18:21:58.306798   34792 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 18:21:58.306809   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 18:21:58.306818   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306825   34792 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 18:21:58.306837   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306847   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306853   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306863   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306870   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.306879   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.306888   34792 command_runner.go:130] > allowed_annotations = [
	I1009 18:21:58.306898   34792 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 18:21:58.306904   34792 command_runner.go:130] > ]
	I1009 18:21:58.306914   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.306921   34792 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 18:21:58.306931   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 18:21:58.306937   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306944   34792 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 18:21:58.306952   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306962   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306970   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306980   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306989   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.307006   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.307017   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.307031   34792 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 18:21:58.307040   34792 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 18:21:58.307053   34792 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 18:21:58.307068   34792 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 18:21:58.307088   34792 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 18:21:58.307107   34792 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 18:21:58.307121   34792 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 18:21:58.307130   34792 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 18:21:58.307160   34792 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 18:21:58.307179   34792 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 18:21:58.307192   34792 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 18:21:58.307206   34792 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 18:21:58.307215   34792 command_runner.go:130] > # Example:
	I1009 18:21:58.307224   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 18:21:58.307234   34792 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 18:21:58.307244   34792 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 18:21:58.307253   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 18:21:58.307262   34792 command_runner.go:130] > # cpuset = "0-1"
	I1009 18:21:58.307269   34792 command_runner.go:130] > # cpushares = "5"
	I1009 18:21:58.307278   34792 command_runner.go:130] > # cpuquota = "1000"
	I1009 18:21:58.307285   34792 command_runner.go:130] > # cpuperiod = "100000"
	I1009 18:21:58.307294   34792 command_runner.go:130] > # cpulimit = "35"
	I1009 18:21:58.307301   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.307309   34792 command_runner.go:130] > # The workload name is workload-type.
	I1009 18:21:58.307323   34792 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 18:21:58.307336   34792 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 18:21:58.307349   34792 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 18:21:58.307365   34792 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 18:21:58.307377   34792 command_runner.go:130] > # "io.crio.workload-type.cpushares/$container_name = "value""
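As a sketch of the example above: a pod opting into the "workload-type" workload and overriding cpushares for one container would carry annotations along these lines (the container name "app" is a placeholder):

metadata:
  annotations:
    io.crio/workload: ""                         # activation annotation; the value is ignored
    io.crio.workload-type.cpushares/app: "512"   # per-container override in the $annotation_prefix.$resource/$ctrName form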
	I1009 18:21:58.307388   34792 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 18:21:58.307399   34792 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 18:21:58.307410   34792 command_runner.go:130] > # Default value is set to true
	I1009 18:21:58.307418   34792 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 18:21:58.307430   34792 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 18:21:58.307440   34792 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 18:21:58.307449   34792 command_runner.go:130] > # Default value is set to 'false'
	I1009 18:21:58.307462   34792 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 18:21:58.307474   34792 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1009 18:21:58.307487   34792 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 18:21:58.307495   34792 command_runner.go:130] > # timezone = ""
	I1009 18:21:58.307506   34792 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 18:21:58.307513   34792 command_runner.go:130] > #
	I1009 18:21:58.307523   34792 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 18:21:58.307536   34792 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 18:21:58.307544   34792 command_runner.go:130] > [crio.image]
	I1009 18:21:58.307556   34792 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 18:21:58.307566   34792 command_runner.go:130] > # default_transport = "docker://"
	I1009 18:21:58.307578   34792 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 18:21:58.307591   34792 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307600   34792 command_runner.go:130] > # global_auth_file = ""
	I1009 18:21:58.307608   34792 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 18:21:58.307620   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307630   34792 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.307641   34792 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 18:21:58.307654   34792 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307665   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307675   34792 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 18:21:58.307686   34792 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 18:21:58.307698   34792 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 18:21:58.307708   34792 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 18:21:58.307719   34792 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 18:21:58.307727   34792 command_runner.go:130] > # pause_command = "/pause"
	I1009 18:21:58.307740   34792 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 18:21:58.307753   34792 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 18:21:58.307765   34792 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 18:21:58.307777   34792 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 18:21:58.307789   34792 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 18:21:58.307802   34792 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 18:21:58.307811   34792 command_runner.go:130] > # pinned_images = [
	I1009 18:21:58.307819   34792 command_runner.go:130] > # ]
	I1009 18:21:58.307830   34792 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 18:21:58.307842   34792 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 18:21:58.307855   34792 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 18:21:58.307868   34792 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 18:21:58.307879   34792 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 18:21:58.307887   34792 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 18:21:58.307899   34792 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 18:21:58.307912   34792 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 18:21:58.307930   34792 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 18:21:58.307943   34792 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1009 18:21:58.307955   34792 command_runner.go:130] > # system-wide policy will be used as a fallback. Must be an absolute path.
	I1009 18:21:58.307971   34792 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 18:21:58.307982   34792 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 18:21:58.308001   34792 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 18:21:58.308010   34792 command_runner.go:130] > # changing them here.
	I1009 18:21:58.308020   34792 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 18:21:58.308029   34792 command_runner.go:130] > # insecure_registries = [
	I1009 18:21:58.308035   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308049   34792 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 18:21:58.308059   34792 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 18:21:58.308067   34792 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 18:21:58.308079   34792 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 18:21:58.308089   34792 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 18:21:58.308100   34792 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 18:21:58.308114   34792 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 18:21:58.308123   34792 command_runner.go:130] > # auto_reload_registries = false
	I1009 18:21:58.308133   34792 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 18:21:58.308163   34792 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1009 18:21:58.308174   34792 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 18:21:58.308183   34792 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 18:21:58.308191   34792 command_runner.go:130] > # The mode of short name resolution.
	I1009 18:21:58.308205   34792 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 18:21:58.308219   34792 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 18:21:58.308230   34792 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 18:21:58.308238   34792 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 18:21:58.308250   34792 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 18:21:58.308261   34792 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 18:21:58.308271   34792 command_runner.go:130] > # oci_artifact_mount_support = true
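For context, consuming an OCI artifact from a pod goes through the image volume source; a minimal sketch, assuming a Kubernetes version with the ImageVolume feature enabled and using a placeholder artifact reference:

apiVersion: v1
kind: Pod
metadata:
  name: artifact-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.10.1
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    image:
      reference: quay.io/example/artifact:latest   # placeholder OCI artifact reference
      pullPolicy: IfNotPresent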
	I1009 18:21:58.308282   34792 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 18:21:58.308291   34792 command_runner.go:130] > # CNI plugins.
	I1009 18:21:58.308297   34792 command_runner.go:130] > [crio.network]
	I1009 18:21:58.308312   34792 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 18:21:58.308324   34792 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 18:21:58.308334   34792 command_runner.go:130] > # cni_default_network = ""
	I1009 18:21:58.308345   34792 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 18:21:58.308355   34792 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 18:21:58.308365   34792 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 18:21:58.308373   34792 command_runner.go:130] > # plugin_dirs = [
	I1009 18:21:58.308380   34792 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 18:21:58.308388   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308395   34792 command_runner.go:130] > # List of included pod metrics.
	I1009 18:21:58.308404   34792 command_runner.go:130] > # included_pod_metrics = [
	I1009 18:21:58.308411   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308423   34792 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 18:21:58.308429   34792 command_runner.go:130] > [crio.metrics]
	I1009 18:21:58.308440   34792 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 18:21:58.308447   34792 command_runner.go:130] > # enable_metrics = false
	I1009 18:21:58.308457   34792 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 18:21:58.308466   34792 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 18:21:58.308479   34792 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 18:21:58.308492   34792 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 18:21:58.308504   34792 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 18:21:58.308514   34792 command_runner.go:130] > # metrics_collectors = [
	I1009 18:21:58.308520   34792 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 18:21:58.308525   34792 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 18:21:58.308530   34792 command_runner.go:130] > # 	"containers_oom_total",
	I1009 18:21:58.308535   34792 command_runner.go:130] > # 	"processes_defunct",
	I1009 18:21:58.308540   34792 command_runner.go:130] > # 	"operations_total",
	I1009 18:21:58.308546   34792 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 18:21:58.308553   34792 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 18:21:58.308560   34792 command_runner.go:130] > # 	"operations_errors_total",
	I1009 18:21:58.308567   34792 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 18:21:58.308574   34792 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 18:21:58.308581   34792 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 18:21:58.308590   34792 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 18:21:58.308598   34792 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 18:21:58.308605   34792 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 18:21:58.308613   34792 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 18:21:58.308620   34792 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 18:21:58.308630   34792 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 18:21:58.308635   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308646   34792 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 18:21:58.308656   34792 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 18:21:58.308664   34792 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 18:21:58.308673   34792 command_runner.go:130] > # metrics_port = 9090
	I1009 18:21:58.308682   34792 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 18:21:58.308691   34792 command_runner.go:130] > # metrics_socket = ""
	I1009 18:21:58.308699   34792 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 18:21:58.308713   34792 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 18:21:58.308726   34792 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 18:21:58.308736   34792 command_runner.go:130] > # certificate on any modification event.
	I1009 18:21:58.308743   34792 command_runner.go:130] > # metrics_cert = ""
	I1009 18:21:58.308754   34792 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 18:21:58.308765   34792 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 18:21:58.308774   34792 command_runner.go:130] > # metrics_key = ""
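If metrics were enabled, scraping them is a one-stanza Prometheus job; a sketch assuming the default host and port above (the job name is arbitrary):

scrape_configs:
- job_name: crio                   # hypothetical job name
  static_configs:
  - targets: ["127.0.0.1:9090"]    # metrics_host:metrics_port defaults from above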
	I1009 18:21:58.308785   34792 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 18:21:58.308793   34792 command_runner.go:130] > [crio.tracing]
	I1009 18:21:58.308803   34792 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 18:21:58.308812   34792 command_runner.go:130] > # enable_tracing = false
	I1009 18:21:58.308821   34792 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 18:21:58.308831   34792 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 18:21:58.308842   34792 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 18:21:58.308854   34792 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
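On the receiving end, an OpenTelemetry Collector would listen on the same endpoint; a minimal sketch of the receiver stanza, assuming the default tracing_endpoint above:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 127.0.0.1:4317   # matches the tracing_endpoint default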
	I1009 18:21:58.308864   34792 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 18:21:58.308871   34792 command_runner.go:130] > [crio.nri]
	I1009 18:21:58.308879   34792 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 18:21:58.308888   34792 command_runner.go:130] > # enable_nri = true
	I1009 18:21:58.308908   34792 command_runner.go:130] > # NRI socket to listen on.
	I1009 18:21:58.308919   34792 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 18:21:58.308926   34792 command_runner.go:130] > # NRI plugin directory to use.
	I1009 18:21:58.308934   34792 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 18:21:58.308945   34792 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 18:21:58.308955   34792 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 18:21:58.308967   34792 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 18:21:58.309020   34792 command_runner.go:130] > # nri_disable_connections = false
	I1009 18:21:58.309031   34792 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 18:21:58.309039   34792 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 18:21:58.309050   34792 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 18:21:58.309060   34792 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 18:21:58.309070   34792 command_runner.go:130] > # NRI default validator configuration.
	I1009 18:21:58.309081   34792 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 18:21:58.309094   34792 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 18:21:58.309105   34792 command_runner.go:130] > # can be restricted/rejected:
	I1009 18:21:58.309114   34792 command_runner.go:130] > # - OCI hook injection
	I1009 18:21:58.309123   34792 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 18:21:58.309144   34792 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 18:21:58.309154   34792 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 18:21:58.309164   34792 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 18:21:58.309174   34792 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 18:21:58.309187   34792 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 18:21:58.309199   34792 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 18:21:58.309206   34792 command_runner.go:130] > #
	I1009 18:21:58.309213   34792 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 18:21:58.309228   34792 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 18:21:58.309239   34792 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 18:21:58.309249   34792 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 18:21:58.309259   34792 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 18:21:58.309270   34792 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 18:21:58.309282   34792 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 18:21:58.309292   34792 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 18:21:58.309300   34792 command_runner.go:130] > # ]
	I1009 18:21:58.309310   34792 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 18:21:58.309320   34792 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 18:21:58.309329   34792 command_runner.go:130] > [crio.stats]
	I1009 18:21:58.309338   34792 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 18:21:58.309350   34792 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 18:21:58.309361   34792 command_runner.go:130] > # stats_collection_period = 0
	I1009 18:21:58.309373   34792 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 18:21:58.309386   34792 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 18:21:58.309395   34792 command_runner.go:130] > # collection_period = 0
	I1009 18:21:58.309439   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287848676Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 18:21:58.309455   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287874416Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 18:21:58.309486   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.28789246Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 18:21:58.309504   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287909281Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 18:21:58.309520   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287966347Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:58.309548   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.288147535Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 18:21:58.309568   34792 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 18:21:58.309652   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:58.309667   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:58.309686   34792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:21:58.309718   34792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:21:58.309867   34792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:21:58.309941   34792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:21:58.317943   34792 command_runner.go:130] > kubeadm
	I1009 18:21:58.317964   34792 command_runner.go:130] > kubectl
	I1009 18:21:58.317972   34792 command_runner.go:130] > kubelet
	I1009 18:21:58.317992   34792 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:21:58.318041   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:21:58.325700   34792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:21:58.338455   34792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:21:58.350701   34792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:21:58.362930   34792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:21:58.366724   34792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1009 18:21:58.366809   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:58.451602   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:58.464478   34792 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:21:58.464503   34792 certs.go:195] generating shared ca certs ...
	I1009 18:21:58.464518   34792 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:58.464657   34792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:21:58.464699   34792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:21:58.464708   34792 certs.go:257] generating profile certs ...
	I1009 18:21:58.464789   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:21:58.464832   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:21:58.464870   34792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:21:58.464880   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:21:58.464891   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:21:58.464904   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:21:58.464914   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:21:58.464926   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:21:58.464938   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:21:58.464950   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:21:58.464961   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:21:58.465007   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:21:58.465033   34792 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:21:58.465040   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:21:58.465060   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:21:58.465083   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:21:58.465117   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:21:58.465182   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:58.465212   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.465226   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.465252   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.465730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:21:58.483386   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:21:58.500383   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:21:58.517315   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:21:58.533903   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:21:58.550845   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:21:58.567242   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:21:58.584667   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:21:58.601626   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:21:58.618749   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:21:58.635789   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:21:58.652270   34792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:21:58.664508   34792 ssh_runner.go:195] Run: openssl version
	I1009 18:21:58.670569   34792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 18:21:58.670643   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:21:58.679189   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683037   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683067   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683111   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.716325   34792 command_runner.go:130] > b5213941
	I1009 18:21:58.716574   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:21:58.724647   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:21:58.732750   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736237   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736342   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736392   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.769488   34792 command_runner.go:130] > 51391683
	I1009 18:21:58.769675   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:21:58.778213   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:21:58.786758   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790431   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790472   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790516   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.824579   34792 command_runner.go:130] > 3ec20f2e
	I1009 18:21:58.824670   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:21:58.832975   34792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836722   34792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836745   34792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 18:21:58.836750   34792 command_runner.go:130] > Device: 8,1	Inode: 583629      Links: 1
	I1009 18:21:58.836756   34792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.836762   34792 command_runner.go:130] > Access: 2025-10-09 18:17:52.024667536 +0000
	I1009 18:21:58.836766   34792 command_runner.go:130] > Modify: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836771   34792 command_runner.go:130] > Change: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836775   34792 command_runner.go:130] >  Birth: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836829   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:21:58.871297   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.871384   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:21:58.905951   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.906293   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:21:58.941072   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.941180   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:21:58.975637   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.975713   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:21:59.010686   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:59.010763   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 18:21:59.045288   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:59.045372   34792 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:59.045468   34792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:21:59.045548   34792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:21:59.072734   34792 cri.go:89] found id: ""
	I1009 18:21:59.072811   34792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:21:59.080291   34792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 18:21:59.080312   34792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 18:21:59.080317   34792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 18:21:59.080960   34792 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:21:59.080977   34792 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:21:59.081028   34792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:21:59.088791   34792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:21:59.088891   34792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.088923   34792 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753440" cluster setting kubeconfig missing "functional-753440" context setting]
	I1009 18:21:59.089226   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.115972   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.116113   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.116551   34792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:21:59.116565   34792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:21:59.116570   34792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:21:59.116574   34792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:21:59.116578   34792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:21:59.116681   34792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:21:59.116939   34792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:21:59.125251   34792 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:21:59.125284   34792 kubeadm.go:601] duration metric: took 44.302105ms to restartPrimaryControlPlane
	I1009 18:21:59.125294   34792 kubeadm.go:402] duration metric: took 79.928873ms to StartCluster
	I1009 18:21:59.125313   34792 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.125417   34792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.125977   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.126266   34792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:21:59.126330   34792 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:21:59.126472   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:59.126485   34792 addons.go:69] Setting default-storageclass=true in profile "functional-753440"
	I1009 18:21:59.126503   34792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753440"
	I1009 18:21:59.126475   34792 addons.go:69] Setting storage-provisioner=true in profile "functional-753440"
	I1009 18:21:59.126533   34792 addons.go:238] Setting addon storage-provisioner=true in "functional-753440"
	I1009 18:21:59.126575   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.126787   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.126953   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.129433   34792 out.go:179] * Verifying Kubernetes components...
	I1009 18:21:59.130694   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:59.147348   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.147489   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.147681   34792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:21:59.147763   34792 addons.go:238] Setting addon default-storageclass=true in "functional-753440"
	I1009 18:21:59.147799   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.148103   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.149131   34792 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.149169   34792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:21:59.149223   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172020   34792 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.172047   34792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:21:59.172108   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172953   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.190936   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
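The two sshutil.go lines open SSH sessions into the node container through the host port that docker publishes for 22/tcp (32778, per the inspect calls above); the scp'd addon manifests travel over these sessions. A minimal sketch of such a client with golang.org/x/crypto/ssh, using the endpoint and key path recorded in this log; host-key checking is disabled only because this is a throwaway test node:

package sshdial

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// dial connects to the minikube node over the docker-forwarded SSH port
// using the machine's private key, as the sshutil.go entries above record.
func dial() (*ssh.Client, error) {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa")
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for an ephemeral test node only
	}
	return ssh.Dial("tcp", "127.0.0.1:32778", cfg)
}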
	I1009 18:21:59.227445   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:59.240811   34792 node_ready.go:35] waiting up to 6m0s for node "functional-753440" to be "Ready" ...
	I1009 18:21:59.240954   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.241028   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.241430   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
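From here, node_ready.go polls GET /api/v1/nodes/functional-753440 on a roughly 500ms cadence, for up to the 6m0s budget set above, until the node's Ready condition turns True. A minimal sketch of that polling pattern with client-go (function and parameter names are mine, not minikube's):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the node until its Ready condition is True or the
// timeout expires. Transient errors (e.g. the "connection refused" entries
// below, while the apiserver restarts) do not abort the wait; the condition
// simply reports "not ready yet", which matches the behavior this log shows.
func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // swallow transient errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}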
	I1009 18:21:59.284375   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.300190   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.338559   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.338609   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.338653   34792 retry.go:31] will retry after 183.514108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353053   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.353121   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353157   34792 retry.go:31] will retry after 252.751171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
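Both applies fail the same way: kubectl performs client-side validation by fetching the apiserver's OpenAPI document, and nothing is listening on localhost:8441 yet (the control plane is still coming back up), so the fetch is refused and the apply exits non-zero. minikube's retry.go then reschedules the command, escalating to kubectl apply --force on the attempts that follow, with delays that grow roughly exponentially with jitter (183ms here, up to 13.4s later in this log). A minimal sketch of that retry-with-backoff shape in generic Go, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op until it succeeds or attempts are exhausted,
// roughly doubling the delay each round and adding jitter, which matches the
// growing "will retry after ..." intervals in this log.
func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)/2 + 1)) // +1 avoids Int63n(0)
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	}, 10, 200*time.Millisecond)
	fmt.Println("result:", err, "after", calls, "calls")
}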
	I1009 18:21:59.522422   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.573424   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.575988   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.576058   34792 retry.go:31] will retry after 293.779687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.606194   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.660438   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.660484   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.660501   34792 retry.go:31] will retry after 279.387954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.741722   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.741829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.742206   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.870497   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.921333   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.923563   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.923589   34792 retry.go:31] will retry after 737.997993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.940822   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.989898   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.992209   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.992239   34792 retry.go:31] will retry after 533.533276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.241740   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.242177   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:00.526746   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:00.575738   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.578103   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.578131   34792 retry.go:31] will retry after 930.387704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.662455   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:00.715389   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.715427   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.715452   34792 retry.go:31] will retry after 867.874306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.741572   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:01.241687   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.241751   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.242091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:01.242159   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:01.509541   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:01.558188   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.560577   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.560605   34792 retry.go:31] will retry after 1.199996419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.583824   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:01.634758   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.634811   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.634834   34792 retry.go:31] will retry after 674.661756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.741022   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.741106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.741428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.309923   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:02.359167   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.361481   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.361513   34792 retry.go:31] will retry after 1.255051156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.741014   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.741086   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.741469   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.761694   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:02.809418   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.811709   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.811735   34792 retry.go:31] will retry after 2.010356843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.241377   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.241665   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:03.617237   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:03.670575   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:03.670619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.670643   34792 retry.go:31] will retry after 3.029315393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:03.742368   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:04.241167   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.241255   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.241616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.741405   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.741793   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.823125   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:04.874252   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:04.876942   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:04.876977   34792 retry.go:31] will retry after 2.337146666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:05.241523   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.241603   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.241925   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:05.741876   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.741944   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.742306   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.241056   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.241120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.241524   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:06.241591   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:06.701185   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:06.741960   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.742030   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.742348   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.753588   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:06.753625   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:06.753645   34792 retry.go:31] will retry after 5.067292314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.214286   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:07.241989   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.242085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.242465   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:07.267576   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:07.267619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.267638   34792 retry.go:31] will retry after 3.639407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.741211   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.741611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:08.241376   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.241468   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.241797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:08.241859   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:08.741654   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.741723   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.742130   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.742012   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.742487   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.241171   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.241238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.241608   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.741552   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.741634   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.741987   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:10.742077   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:10.907343   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:10.958356   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:10.960749   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:10.960774   34792 retry.go:31] will retry after 7.184910667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.241304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.241646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.741393   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.741703   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.821955   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:11.870785   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:11.873227   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.873260   34792 retry.go:31] will retry after 9.534535371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:12.241850   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.242244   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:12.741040   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.741121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:13.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:13.241344   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:13.241681   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:13.241752   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:13.741448   34792 type.go:168] "Request Body" body=""
	I1009 18:22:13.741557   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:13.741881   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:14.241703   34792 type.go:168] "Request Body" body=""
	I1009 18:22:14.241767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:14.242071   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:14.741971   34792 type.go:168] "Request Body" body=""
	I1009 18:22:14.742058   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:14.742415   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:15.241162   34792 type.go:168] "Request Body" body=""
	I1009 18:22:15.241227   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:15.241543   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:15.741329   34792 type.go:168] "Request Body" body=""
	I1009 18:22:15.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:15.741713   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:15.741779   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:16.241461   34792 type.go:168] "Request Body" body=""
	I1009 18:22:16.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:16.241841   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:16.741694   34792 type.go:168] "Request Body" body=""
	I1009 18:22:16.741756   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:16.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:17.241938   34792 type.go:168] "Request Body" body=""
	I1009 18:22:17.242012   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:17.242354   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:17.741119   34792 type.go:168] "Request Body" body=""
	I1009 18:22:17.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:17.741520   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:18.146014   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:18.197672   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:18.200076   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.200108   34792 retry.go:31] will retry after 13.416592948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.241338   34792 type.go:168] "Request Body" body=""
	I1009 18:22:18.241421   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:18.241742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:18.241815   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... poll repeated every ~500ms from 18:22:18.74 to 18:22:21.24, every attempt refused; node_ready.go:55 repeated its "connection refused (will retry)" warning at 18:22:20.74 ...]
	I1009 18:22:21.408800   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:21.460386   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:21.460443   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr: the same validation error as above)
	I1009 18:22:21.460465   34792 retry.go:31] will retry after 6.196258431s
	[... poll repeated every ~500ms from 18:22:21.74 to 18:22:27.24, every attempt refused; node_ready.go:55 warnings at 18:22:23.24 and 18:22:25.74 ...]
	I1009 18:22:27.657912   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:27.709732   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:27.709776   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr: the same validation error as above)
	I1009 18:22:27.709796   34792 retry.go:31] will retry after 21.104663041s
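	The retry.go lines show each failed apply being rescheduled after a randomized, growing delay (13.4s, 6.2s, 21.1s, ...). A minimal sketch of retry with jittered exponential backoff, assuming a hypothetical retryWithBackoff helper rather than minikube's actual retry.go:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to maxAttempts times, sleeping a randomized,
	// doubling delay between attempts, and returns the last error on exhaustion.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break
			}
			// Jittered exponential backoff: base * 2^(attempt-1) * factor in [1, 2).
			delay := time.Duration(float64(base) * float64(uint(1)<<uint(attempt-1)) * (1 + rand.Float64()))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		attempt := 0
		err := retryWithBackoff(3, 500*time.Millisecond, func() error {
			attempt++
			return fmt.Errorf("connection refused (attempt %d)", attempt)
		})
		fmt.Println("gave up:", err)
	}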
	[... poll repeated every ~500ms from 18:22:27.74 to 18:22:31.24, every attempt refused; node_ready.go:55 warning at 18:22:27.74 ...]
	I1009 18:22:31.617269   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:31.669784   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:31.669834   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 (stderr: the same validation error as above)
	I1009 18:22:31.669851   34792 retry.go:31] will retry after 15.154475243s
	[... poll repeated every ~500ms from 18:22:31.74 to 18:22:46.74, every attempt refused; node_ready.go:55 warnings roughly every 2s (18:22:32.74, 34.74, 36.74, 39.24, 41.74, 44.24, 46.24) ...]
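	The paired "Request"/"Response" lines above are produced by a logging wrapper around the Kubernetes client's HTTP transport. A minimal stdlib-only sketch of that pattern, assuming a hypothetical loggingRoundTripper type rather than client-go's actual round_trippers implementation (note the error path prints an empty status and the transport error, matching the refused requests in this log):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	type loggingRoundTripper struct{ next http.RoundTripper }

	// RoundTrip logs the outgoing request, forwards it, then logs the outcome
	// with the elapsed time in milliseconds.
	func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
		start := time.Now()
		fmt.Printf("Request verb=%s url=%s headers=%v\n", req.Method, req.URL, req.Header)
		resp, err := l.next.RoundTrip(req)
		ms := time.Since(start).Milliseconds()
		if err != nil {
			fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
			return nil, err
		}
		fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
		_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-753440")
	}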
	I1009 18:22:46.825331   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:46.875678   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:46.878302   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1 (stderr: the same validation error as above)
	I1009 18:22:46.878331   34792 retry.go:31] will retry after 24.753743157s
	[... poll repeated every ~500ms from 18:22:47.24 to 18:22:48.74, every attempt refused; node_ready.go:55 warning at 18:22:48.74 ...]
	I1009 18:22:48.815023   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:48.866903   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:48.866953   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1 (stderr: the same validation error as above)
	I1009 18:22:48.866975   34792 retry.go:31] will retry after 23.693621864s
	[... poll repeated every ~500ms from 18:22:49.24 through 18:23:09.74, every attempt refused; node_ready.go:55 warnings roughly every 2s (18:22:51.24, 53.24, 55.74, 57.74, 18:23:00.24, 02.24, 04.74, 07.24); the excerpt ends mid-request at 18:23:09.74 ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.741542   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:09.741611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:10.241116   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:10.741472   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.741586   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.741912   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.241829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.242195   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
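	(The repeated GETs above are minikube's node_ready check polling the node object until its Ready condition turns True, backing off roughly 500ms between attempts. A minimal sketch of that pattern with a recent client-go, for orientation only: the function name, timeout, and kubeconfig path are assumptions inferred from the log, not minikube's actual code.)

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	v1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls the named node until its Ready condition is True,
	    // logging and retrying on transient errors such as "connection refused".
	    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	    		if err != nil {
	    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
	    		} else {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
	    					return nil
	    				}
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	    	}
	    	return fmt.Errorf("node %q never became Ready within %v", name, timeout)
	    }

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs := kubernetes.NewForConfigOrDie(cfg)
	    	if err := waitNodeReady(cs, "functional-753440", 4*time.Minute); err != nil {
	    		panic(err)
	    	}
	    }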
	I1009 18:23:11.632645   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:11.684065   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:11.686606   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:23:11.686651   34792 retry.go:31] will retry after 43.228082894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
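	(The "will retry after 43.228082894s" line above comes from minikube's retry helper, which re-runs the failed kubectl apply after a jittered backoff until a deadline passes. A minimal sketch of that retry-until-deadline pattern; the backoff constants and the applyWithRetry helper are illustrative, not minikube's actual implementation.)

	    package main

	    import (
	    	"fmt"
	    	"math/rand"
	    	"os/exec"
	    	"time"
	    )

	    // applyWithRetry re-runs `kubectl apply --force -f manifest` until it
	    // succeeds or the deadline passes, sleeping a jittered, growing interval
	    // between attempts -- the pattern behind the "will retry after ..." line.
	    func applyWithRetry(kubectl, kubeconfig, manifest string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	backoff := 5 * time.Second
	    	for attempt := 1; ; attempt++ {
	    		cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl,
	    			"apply", "--force", "-f", manifest)
	    		out, err := cmd.CombinedOutput()
	    		if err == nil {
	    			return nil
	    		}
	    		if time.Now().After(deadline) {
	    			return fmt.Errorf("apply failed after %d attempts: %v\n%s", attempt, err, out)
	    		}
	    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
	    		fmt.Printf("apply failed, will retry after %v: %v\n", sleep, err)
	    		time.Sleep(sleep)
	    		backoff *= 2 // exponential growth, capped only by the deadline
	    	}
	    }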
	I1009 18:23:11.741902   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.741967   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.742335   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:11.742398   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:12.241111   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.241543   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:12.560933   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:23:12.614798   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614843   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614940   34792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
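	(Note that the failure here is not the manifest itself: kubectl cannot even download the OpenAPI schema because nothing is listening on port 8441, so --validate=false would not rescue the apply. A quick way to separate "apiserver down" from "manifest invalid" is to probe the apiserver's /readyz endpoint first; the standalone check below is hypothetical and not part of minikube.)

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    // probeReadyz reports whether the apiserver answers on its health endpoint.
	    // TLS verification is skipped because we only care about reachability here.
	    func probeReadyz(base string) error {
	    	client := &http.Client{
	    		Timeout:   3 * time.Second,
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	resp, err := client.Get(base + "/readyz")
	    	if err != nil {
	    		return fmt.Errorf("apiserver unreachable: %w", err) // e.g. connection refused
	    	}
	    	defer resp.Body.Close()
	    	fmt.Printf("GET %s/readyz -> %s\n", base, resp.Status)
	    	return nil
	    }

	    func main() {
	    	// The log shows both localhost:8441 and 192.168.49.2:8441 refusing connections.
	    	for _, base := range []string{"https://localhost:8441", "https://192.168.49.2:8441"} {
	    		if err := probeReadyz(base); err != nil {
	    			fmt.Println(err)
	    		}
	    	}
	    }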
	I1009 18:23:12.741072   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.741484   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.241192   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.241516   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.741110   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.741196   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.741493   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:14.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.241314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:14.241738   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:14.741425   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.741488   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.741803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.241603   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.241993   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.741872   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.741942   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.742284   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.241004   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.241108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.241472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.741281   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.741357   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.741657   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:16.741710   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:17.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.241489   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.241829   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:17.741674   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.741762   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.742082   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.241893   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.241965   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.242388   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.741175   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.741239   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.741553   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:19.241408   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.241852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:19.241908   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:19.741678   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.742039   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.241909   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.241972   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.741268   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.741646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.241394   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.241459   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.741624   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.741688   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.741997   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:21.742063   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:22.241916   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.242380   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:22.741197   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.741265   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.741575   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.241382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.741463   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.741537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.741848   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:24.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.241717   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.242059   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:24.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:24.741910   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.741982   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.742333   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.241063   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.241128   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.241505   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.741559   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.741626   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.741933   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:26.241874   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.242332   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:26.242390   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:26.741061   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.741125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.741525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.241264   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.241334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.241644   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.741375   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.741438   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.741748   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.241487   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.241553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.241862   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.741699   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.741767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:28.742126   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:29.241949   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.242051   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.242384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:29.741054   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.741120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.241596   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.741484   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.741560   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.741926   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:31.241778   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.241839   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.242174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:31.242227   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:31.740976   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.741038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.741384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.241106   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.241567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.741352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.241340   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.241406   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.241743   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.741456   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.741516   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.741808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:33.741862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:34.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.241695   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.242060   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:34.741908   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.741974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.241113   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.741288   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:36.241422   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.241820   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:36.241874   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:36.741640   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.741707   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.742009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.241833   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.241903   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.242258   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.740969   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.741033   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.741371   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.241096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.241188   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.241533   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.741616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:38.741669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:39.241545   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.241620   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:39.741751   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.741816   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.742174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.241991   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.242060   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.242448   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.741326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:40.741695   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:41.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.241463   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:41.741321   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.741709   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.241467   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.241529   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.241897   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.741700   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.741768   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.742079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:42.742160   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:43.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:43.741093   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.741513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.241263   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.241346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.241690   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.741269   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.741339   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:45.241373   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.241435   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.241795   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:45.241846   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:45.741727   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.741791   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.742097   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.241926   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.241996   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.741120   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.741602   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.241322   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.241391   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.241768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.741575   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.741638   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.741939   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:47.741988   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:48.241711   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.241771   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.242111   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:48.741933   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.742004   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.742339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.241046   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.241123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.241511   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.741638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:50.241345   34792 type.go:168] "Request Body" body=""
	I1009 18:23:50.241408   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:50.241740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:50.241790   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:50.741667   34792 type.go:168] "Request Body" body=""
	I1009 18:23:50.741736   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:50.742048   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:51.241420   34792 type.go:168] "Request Body" body=""
	I1009 18:23:51.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:51.241828   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:51.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:23:51.741742   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:51.742050   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:52.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:23:52.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:52.242345   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:52.242396   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:52.741096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:52.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:52.741495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:53.241277   34792 type.go:168] "Request Body" body=""
	I1009 18:23:53.241348   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:53.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:53.741468   34792 type.go:168] "Request Body" body=""
	I1009 18:23:53.741553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:53.741866   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:54.241666   34792 type.go:168] "Request Body" body=""
	I1009 18:23:54.241732   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:54.242078   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:54.741932   34792 type.go:168] "Request Body" body=""
	I1009 18:23:54.741997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:54.742359   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:54.742411   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:54.915717   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:54.969064   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969123   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969226   34792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:23:54.971206   34792 out.go:179] * Enabled addons: 
	I1009 18:23:54.972204   34792 addons.go:514] duration metric: took 1m55.845883827s for enable addons: enabled=[]
	I1009 18:23:55.241550   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.241625   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the poll repeats in exactly this form, one GET to https://192.168.49.2:8441/api/v1/nodes/functional-753440 roughly every 500ms with an immediate empty response, and node_ready.go:55 logging the same "connect: connection refused (will retry)" warning about every two seconds, from 18:23:55 through 18:24:53; the intervening identical attempts are elided here ...]
	I1009 18:24:54.241270   34792 type.go:168] "Request Body" body=""
	I1009 18:24:54.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:54.241658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:54.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:24:54.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:54.741636   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:54.741687   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:55.241234   34792 type.go:168] "Request Body" body=""
	I1009 18:24:55.241306   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:55.241626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:55.741410   34792 type.go:168] "Request Body" body=""
	I1009 18:24:55.741479   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:55.741852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:56.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:24:56.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:56.241834   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:56.741423   34792 type.go:168] "Request Body" body=""
	I1009 18:24:56.741492   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:56.741854   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:56.741921   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:57.241419   34792 type.go:168] "Request Body" body=""
	I1009 18:24:57.241484   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:57.241784   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:57.741337   34792 type.go:168] "Request Body" body=""
	I1009 18:24:57.741402   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:57.741768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:58.241353   34792 type.go:168] "Request Body" body=""
	I1009 18:24:58.241420   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:58.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:58.741285   34792 type.go:168] "Request Body" body=""
	I1009 18:24:58.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:58.741698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:59.241536   34792 type.go:168] "Request Body" body=""
	I1009 18:24:59.241601   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:59.241906   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:59.241970   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:59.741466   34792 type.go:168] "Request Body" body=""
	I1009 18:24:59.741528   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:59.741866   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:00.241421   34792 type.go:168] "Request Body" body=""
	I1009 18:25:00.241487   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:00.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:00.741667   34792 type.go:168] "Request Body" body=""
	I1009 18:25:00.741748   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:00.742076   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:01.241775   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.241841   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.242226   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:01.242284   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:01.741879   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.741957   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.742330   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.241978   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.242041   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.242423   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.741029   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.741115   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.741462   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.241086   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.241501   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.741018   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.741114   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:03.741528   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:04.241053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.241116   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.241452   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:04.741007   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.741083   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.741445   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.241037   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.241427   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.741247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.741697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:05.741771   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:06.241254   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.241327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.241639   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:06.741286   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.741366   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.741735   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.741217   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:08.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.241647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:08.241711   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:08.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.741304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.741686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.241716   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.242124   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.741814   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.741880   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.742241   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:10.241918   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.241983   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.242339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:10.242405   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:10.741070   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.741194   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.741236   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.741322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.741656   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.241283   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.241345   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.241648   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.741341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:12.741727   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:13.241274   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.241660   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:13.741258   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.741346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.741679   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.241260   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.741277   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.741354   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:15.241247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.241309   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.241612   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:15.241669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:15.741488   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.741890   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.241468   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.241537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.741415   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.741480   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.741850   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:17.241442   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.241504   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:17.241861   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:17.741344   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.741411   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.741764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.241362   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.241432   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.241786   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.741325   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.741390   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.741723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:19.241633   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.241702   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.242011   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:19.242081   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:19.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.741733   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.742064   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.241763   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.241826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.242186   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.742053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.742131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.742513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.241071   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.241504   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.741088   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.741207   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:21.741594   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:22.241126   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:22.741131   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.741233   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.741588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.241242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.241568   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.741162   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.741242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.741577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:23.741627   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:24.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.241578   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:24.741188   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.741619   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.241275   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.741538   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.741597   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.741905   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:25.741979   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:26.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.241835   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:26.741401   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.741467   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.741780   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.241351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.241416   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.741308   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.741383   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:28.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.241331   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.241634   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:28.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:28.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.741315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.741626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.241574   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.241643   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.241986   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.741657   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.741719   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.742063   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:30.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.241804   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.242168   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:30.242230   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:30.741968   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.742470   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.241076   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.241532   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.741177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.741282   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.741624   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.241670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.741275   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.741360   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.741742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:32.741796   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:33.241329   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.241396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.241697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:33.741289   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.741384   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.741759   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.241368   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.241760   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.741351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.741428   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.741798   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:34.741864   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:35.241399   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.241838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:35.741772   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.741836   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.242003   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.242076   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.242435   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.741028   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.741097   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.741464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:37.241121   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.241212   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.241551   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:37.241620   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:37.741109   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.741219   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.741567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.241177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.741325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:39.241652   34792 type.go:168] "Request Body" body=""
	I1009 18:25:39.241726   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:39.242067   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:39.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:39.741736   34792 type.go:168] "Request Body" body=""
	I1009 18:25:39.741806   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:39.742215   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:40.241891   34792 type.go:168] "Request Body" body=""
	I1009 18:25:40.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:40.242334   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:40.741050   34792 type.go:168] "Request Body" body=""
	I1009 18:25:40.741121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:40.741479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:41.241091   34792 type.go:168] "Request Body" body=""
	I1009 18:25:41.241192   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:41.241525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:41.741118   34792 type.go:168] "Request Body" body=""
	I1009 18:25:41.741208   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:41.741569   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:41.741626   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:42.241220   34792 type.go:168] "Request Body" body=""
	I1009 18:25:42.241296   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:42.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:42.741251   34792 type.go:168] "Request Body" body=""
	I1009 18:25:42.741318   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:42.741643   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:43.241341   34792 type.go:168] "Request Body" body=""
	I1009 18:25:43.241412   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:43.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:43.741353   34792 type.go:168] "Request Body" body=""
	I1009 18:25:43.741418   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:43.741732   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:43.741785   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:44.241361   34792 type.go:168] "Request Body" body=""
	I1009 18:25:44.241434   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:44.241757   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:44.741332   34792 type.go:168] "Request Body" body=""
	I1009 18:25:44.741401   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:44.741760   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:45.241363   34792 type.go:168] "Request Body" body=""
	I1009 18:25:45.241438   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:45.241819   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:45.741752   34792 type.go:168] "Request Body" body=""
	I1009 18:25:45.741826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:45.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:45.742282   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:46.241931   34792 type.go:168] "Request Body" body=""
	I1009 18:25:46.242008   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:46.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:46.740984   34792 type.go:168] "Request Body" body=""
	I1009 18:25:46.741081   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:46.741473   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:47.241027   34792 type.go:168] "Request Body" body=""
	I1009 18:25:47.241148   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:47.241536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:47.741035   34792 type.go:168] "Request Body" body=""
	I1009 18:25:47.741101   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:47.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:48.241082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:48.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:48.241496   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:48.241548   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:48.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:48.741203   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:48.741562   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:49.241540   34792 type.go:168] "Request Body" body=""
	I1009 18:25:49.241609   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:49.241992   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:49.741668   34792 type.go:168] "Request Body" body=""
	I1009 18:25:49.741737   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:49.742062   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:50.241713   34792 type.go:168] "Request Body" body=""
	I1009 18:25:50.241779   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:50.242089   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:50.242165   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:50.741969   34792 type.go:168] "Request Body" body=""
	I1009 18:25:50.742080   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:50.742425   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:51.241055   34792 type.go:168] "Request Body" body=""
	I1009 18:25:51.241121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:51.241485   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:51.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:51.741170   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:51.741493   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:52.241115   34792 type.go:168] "Request Body" body=""
	I1009 18:25:52.241209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:52.241541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:52.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:25:52.741307   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:52.741661   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:52.741713   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:53.241239   34792 type.go:168] "Request Body" body=""
	I1009 18:25:53.241326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:53.241653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:53.741250   34792 type.go:168] "Request Body" body=""
	I1009 18:25:53.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:53.741655   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:54.241252   34792 type.go:168] "Request Body" body=""
	I1009 18:25:54.241357   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:54.241717   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:54.741298   34792 type.go:168] "Request Body" body=""
	I1009 18:25:54.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:54.741680   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:54.741732   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:55.241249   34792 type.go:168] "Request Body" body=""
	I1009 18:25:55.241310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:55.241707   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:55.741639   34792 type.go:168] "Request Body" body=""
	I1009 18:25:55.741703   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:55.742036   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:56.241666   34792 type.go:168] "Request Body" body=""
	I1009 18:25:56.241729   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:56.242065   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:56.741838   34792 type.go:168] "Request Body" body=""
	I1009 18:25:56.741901   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:56.742249   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:56.742310   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:57.241936   34792 type.go:168] "Request Body" body=""
	I1009 18:25:57.242047   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:57.242403   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:57.741073   34792 type.go:168] "Request Body" body=""
	I1009 18:25:57.741156   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:57.741453   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:58.241102   34792 type.go:168] "Request Body" body=""
	I1009 18:25:58.241189   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:58.241532   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:58.741625   34792 type.go:168] "Request Body" body=""
	I1009 18:25:58.741731   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:58.742069   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:59.241918   34792 type.go:168] "Request Body" body=""
	I1009 18:25:59.242002   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:59.242382   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:59.242433   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:59.741586   34792 type.go:168] "Request Body" body=""
	I1009 18:25:59.741680   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:59.742047   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:00.241712   34792 type.go:168] "Request Body" body=""
	I1009 18:26:00.241778   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:00.242123   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:00.741944   34792 type.go:168] "Request Body" body=""
	I1009 18:26:00.742006   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:00.742335   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:01.241998   34792 type.go:168] "Request Body" body=""
	I1009 18:26:01.242063   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:01.242409   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:01.242463   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:01.740980   34792 type.go:168] "Request Body" body=""
	I1009 18:26:01.741043   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:01.741380   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:02.240968   34792 type.go:168] "Request Body" body=""
	I1009 18:26:02.241034   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:02.241387   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:02.740965   34792 type.go:168] "Request Body" body=""
	I1009 18:26:02.741036   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:02.741361   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:03.241979   34792 type.go:168] "Request Body" body=""
	I1009 18:26:03.242041   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:03.242370   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:03.740968   34792 type.go:168] "Request Body" body=""
	I1009 18:26:03.741033   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:03.741362   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:03.741412   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:04.242040   34792 type.go:168] "Request Body" body=""
	I1009 18:26:04.242108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:04.242468   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:04.741070   34792 type.go:168] "Request Body" body=""
	I1009 18:26:04.741158   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:04.741484   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:05.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:26:05.241107   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:05.241461   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:05.741242   34792 type.go:168] "Request Body" body=""
	I1009 18:26:05.741305   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:05.741627   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:05.741678   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:06.241201   34792 type.go:168] "Request Body" body=""
	I1009 18:26:06.241271   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:06.241594   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:06.741216   34792 type.go:168] "Request Body" body=""
	I1009 18:26:06.741302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:06.741638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:07.241228   34792 type.go:168] "Request Body" body=""
	I1009 18:26:07.241309   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:07.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:07.741295   34792 type.go:168] "Request Body" body=""
	I1009 18:26:07.741364   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:07.741662   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:07.741715   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:08.241237   34792 type.go:168] "Request Body" body=""
	I1009 18:26:08.241302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:08.241600   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:08.741196   34792 type.go:168] "Request Body" body=""
	I1009 18:26:08.741257   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:08.741600   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:09.241564   34792 type.go:168] "Request Body" body=""
	I1009 18:26:09.241629   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:09.241949   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:09.741615   34792 type.go:168] "Request Body" body=""
	I1009 18:26:09.741680   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:09.741985   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:09.742040   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:10.241636   34792 type.go:168] "Request Body" body=""
	I1009 18:26:10.241706   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:10.242002   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:10.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:26:10.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:10.742285   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:11.241928   34792 type.go:168] "Request Body" body=""
	I1009 18:26:11.241997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:11.242350   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:11.742032   34792 type.go:168] "Request Body" body=""
	I1009 18:26:11.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:11.742451   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:11.742508   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:12.241054   34792 type.go:168] "Request Body" body=""
	I1009 18:26:12.241123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:12.241536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:12.741176   34792 type.go:168] "Request Body" body=""
	I1009 18:26:12.741242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:12.741599   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:13.241179   34792 type.go:168] "Request Body" body=""
	I1009 18:26:13.241237   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:13.241552   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:13.741164   34792 type.go:168] "Request Body" body=""
	I1009 18:26:13.741229   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:13.741597   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:14.241174   34792 type.go:168] "Request Body" body=""
	I1009 18:26:14.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:14.241576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:14.241632   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:14.741184   34792 type.go:168] "Request Body" body=""
	I1009 18:26:14.741250   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:14.741553   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:15.241116   34792 type.go:168] "Request Body" body=""
	I1009 18:26:15.241224   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:15.241537   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:15.741317   34792 type.go:168] "Request Body" body=""
	I1009 18:26:15.741389   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:15.741689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:16.241241   34792 type.go:168] "Request Body" body=""
	I1009 18:26:16.241305   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:16.241632   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:16.241683   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:16.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:26:16.741325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:16.741630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:17.241224   34792 type.go:168] "Request Body" body=""
	I1009 18:26:17.241286   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:17.241599   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:17.741225   34792 type.go:168] "Request Body" body=""
	I1009 18:26:17.741291   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:17.741594   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:18.241198   34792 type.go:168] "Request Body" body=""
	I1009 18:26:18.241264   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:18.241577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:18.741185   34792 type.go:168] "Request Body" body=""
	I1009 18:26:18.741257   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:18.741577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:18.741626   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:19.241353   34792 type.go:168] "Request Body" body=""
	I1009 18:26:19.241426   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:19.241744   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:19.741299   34792 type.go:168] "Request Body" body=""
	I1009 18:26:19.741364   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:19.741663   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:20.241246   34792 type.go:168] "Request Body" body=""
	I1009 18:26:20.241316   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:20.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:20.741541   34792 type.go:168] "Request Body" body=""
	I1009 18:26:20.741607   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:20.741914   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:20.741966   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:21.241518   34792 type.go:168] "Request Body" body=""
	I1009 18:26:21.241583   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:21.241885   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:21.741448   34792 type.go:168] "Request Body" body=""
	I1009 18:26:21.741515   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:21.741816   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:22.241407   34792 type.go:168] "Request Body" body=""
	I1009 18:26:22.241471   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:22.241770   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:22.741331   34792 type.go:168] "Request Body" body=""
	I1009 18:26:22.741400   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:22.741698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:23.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:26:23.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:23.241638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:23.241693   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:23.741220   34792 type.go:168] "Request Body" body=""
	I1009 18:26:23.741300   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:23.741602   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:24.241221   34792 type.go:168] "Request Body" body=""
	I1009 18:26:24.241295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:24.241598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:24.741133   34792 type.go:168] "Request Body" body=""
	I1009 18:26:24.741216   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:24.741539   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:25.241114   34792 type.go:168] "Request Body" body=""
	I1009 18:26:25.241213   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:25.241546   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:25.741511   34792 type.go:168] "Request Body" body=""
	I1009 18:26:25.741576   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:25.741865   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:25.741922   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:26.241516   34792 type.go:168] "Request Body" body=""
	I1009 18:26:26.241579   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:26.241882   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:26.741449   34792 type.go:168] "Request Body" body=""
	I1009 18:26:26.741511   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:26.741816   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:27.241391   34792 type.go:168] "Request Body" body=""
	I1009 18:26:27.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:27.241802   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:27.741394   34792 type.go:168] "Request Body" body=""
	I1009 18:26:27.741461   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:27.741756   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:28.241317   34792 type.go:168] "Request Body" body=""
	I1009 18:26:28.241388   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:28.241721   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:28.241777   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:28.741288   34792 type.go:168] "Request Body" body=""
	I1009 18:26:28.741355   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:28.741648   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:29.241543   34792 type.go:168] "Request Body" body=""
	I1009 18:26:29.241610   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:29.241914   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:29.741477   34792 type.go:168] "Request Body" body=""
	I1009 18:26:29.741542   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:29.741838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:30.241416   34792 type.go:168] "Request Body" body=""
	I1009 18:26:30.241476   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:30.241809   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:30.241861   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:30.741676   34792 type.go:168] "Request Body" body=""
	I1009 18:26:30.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:30.742049   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:31.241791   34792 type.go:168] "Request Body" body=""
	I1009 18:26:31.241858   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:31.242183   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:31.741839   34792 type.go:168] "Request Body" body=""
	I1009 18:26:31.741908   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:31.742213   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:32.241895   34792 type.go:168] "Request Body" body=""
	I1009 18:26:32.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:32.242308   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:32.242358   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:32.741973   34792 type.go:168] "Request Body" body=""
	I1009 18:26:32.742037   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:32.742358   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:33.241033   34792 type.go:168] "Request Body" body=""
	I1009 18:26:33.241095   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:33.241444   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:33.741092   34792 type.go:168] "Request Body" body=""
	I1009 18:26:33.741183   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:33.741483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:34.241043   34792 type.go:168] "Request Body" body=""
	I1009 18:26:34.241106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:34.241473   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:34.741040   34792 type.go:168] "Request Body" body=""
	I1009 18:26:34.741103   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:34.741434   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:34.741487   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:35.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:26:35.241193   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:35.241503   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:35.741438   34792 type.go:168] "Request Body" body=""
	I1009 18:26:35.741506   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:35.741812   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:36.241366   34792 type.go:168] "Request Body" body=""
	I1009 18:26:36.241429   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:36.241735   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:36.741315   34792 type.go:168] "Request Body" body=""
	I1009 18:26:36.741379   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:36.741698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:36.741752   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:37.241310   34792 type.go:168] "Request Body" body=""
	I1009 18:26:37.241385   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:37.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:37.741251   34792 type.go:168] "Request Body" body=""
	I1009 18:26:37.741329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:37.741650   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:38.241235   34792 type.go:168] "Request Body" body=""
	I1009 18:26:38.241299   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:38.241604   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:38.741249   34792 type.go:168] "Request Body" body=""
	I1009 18:26:38.741311   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:38.741610   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:39.241956   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET/empty-response cycle repeats every ~500ms from 18:26:38 through 18:27:40, and the same node_ready.go:55 "connection refused" warning is logged every couple of seconds throughout ...]
	I1009 18:27:40.741822   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.741912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.742310   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.241924   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.241992   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.242352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.742037   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.742123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.742467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:41.742533   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:42.241062   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.241131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.241483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:42.741199   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.741261   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.741576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.241209   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.241285   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.241620   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.741257   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.741675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:44.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.241630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:44.241684   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:44.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.741621   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.241009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.241089   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.241464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.741255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:46.241261   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.241687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:46.241736   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:46.741271   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.741338   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.241666   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.741310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.741653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.241251   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.241651   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.741328   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.741647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:48.741699   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:49.241692   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.241772   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.242116   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:49.741779   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.741846   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.242357   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.741207   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:51.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.241313   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:51.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:51.741256   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.741385   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.741740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.241321   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.241392   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.241724   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.741315   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.741382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.741729   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:53.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.241398   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:53.241797   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:53.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.741465   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.741821   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.241418   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.241482   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.241803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.741399   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.741794   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:55.241395   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:55.241851   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:55.741689   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.741763   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.742091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.241733   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.241801   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.242128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.741823   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.741896   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.742277   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:57.241950   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.242025   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:57.242451   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:57.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.741454   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.241127   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.241225   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.741208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.741281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:59.241113   34792 node_ready.go:38] duration metric: took 6m0.000256287s for node "functional-753440" to be "Ready" ...
	I1009 18:27:59.244464   34792 out.go:203] 
	W1009 18:27:59.246567   34792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 18:27:59.246590   34792 out.go:285] * 
	W1009 18:27:59.248293   34792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:27:59.250105   34792 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-753440 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.394917528s for "functional-753440" cluster.
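
For context, the wait that exhausts the 6m0s budget above is a plain poll: every 500ms minikube re-fetches the node object and checks its Ready condition, swallowing transient errors such as the "connection refused" seen throughout the log, until the deadline expires. Below is a minimal sketch of that pattern with client-go; it is illustrative only (the kubeconfig path is a placeholder), not minikube's actual node_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the real harness uses the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, give up after 6 minutes -- the interval and
		// deadline visible in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "functional-753440", metav1.GetOptions{})
				if err != nil {
					// Treat transient errors (connection refused while the
					// apiserver is down) as "not ready yet" so the poll retries.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			// On failure this is "context deadline exceeded", matching the report.
			fmt.Println("node never became Ready:", err)
		}
	}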
I1009 18:27:59.761488   14880 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
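
The inspect output above is also how the harness reaches the node: each container port (22, 2376, 5000, 8441, 32443) is published on 127.0.0.1 with a dynamically assigned host port, e.g. the apiserver's 8441/tcp lands on 32781. A small sketch of reading those mappings with the Docker Engine Go SDK (github.com/docker/docker/client); illustrative, not the test helpers' actual code:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Same data as `docker inspect functional-753440`, fetched over the API.
		info, err := cli.ContainerInspect(context.Background(), "functional-753440")
		if err != nil {
			panic(err)
		}
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				// Prints e.g. "8441/tcp -> 127.0.0.1:32781" per the output above.
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}

The CLI equivalent is docker inspect -f '{{json .NetworkSettings.Ports}}' functional-753440.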
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (332.188082ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
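Note the mismatch the post-mortem surfaces: the container host reports Running, yet every API request in the log failed with "connection refused", i.e. nothing was listening on 192.168.49.2:8441 inside an otherwise healthy node. Connection refused comes back immediately from the kernel, unlike a timeout, so a quick TCP probe separates "apiserver down" from "network unreachable". An illustrative probe in Go (endpoint taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Apiserver endpoint from the log above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// "connection refused": the host is reachable but nothing listens on
			// 8441 (apiserver down). A timeout would instead point at routing.
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}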
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-753440 logs -n 25: (1.029670643s)
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-240600                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-240600   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ --download-only -p download-docker-360662 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-360662 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p download-docker-360662                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-360662 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ --download-only -p binary-mirror-609906 --alsologtostderr --binary-mirror http://127.0.0.1:44531 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-609906   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p binary-mirror-609906                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-609906   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ addons  │ enable dashboard -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ start   │ -p addons-246638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 18:04 UTC │ 09 Oct 25 18:05 UTC │
	│ start   │ -p nospam-663194 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-663194 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:05 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-753440      │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ -p functional-753440 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-753440      │ jenkins │ v1.37.0 │ 09 Oct 25 18:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:21:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:21:55.407242   34792 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:21:55.407482   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407490   34792 out.go:374] Setting ErrFile to fd 2...
	I1009 18:21:55.407494   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407669   34792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:21:55.408109   34792 out.go:368] Setting JSON to false
	I1009 18:21:55.408948   34792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3863,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:21:55.409029   34792 start.go:141] virtualization: kvm guest
	I1009 18:21:55.411208   34792 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:21:55.412706   34792 notify.go:220] Checking for updates...
	I1009 18:21:55.412728   34792 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:21:55.414107   34792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:21:55.415609   34792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:55.417005   34792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:21:55.418411   34792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:21:55.419884   34792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:21:55.421538   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:55.421658   34792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:21:55.445068   34792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:21:55.445204   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.504624   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.494450296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.504746   34792 docker.go:318] overlay module found
	I1009 18:21:55.507261   34792 out.go:179] * Using the docker driver based on existing profile
	I1009 18:21:55.508504   34792 start.go:305] selected driver: docker
	I1009 18:21:55.508518   34792 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.508594   34792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:21:55.508665   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.566793   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.557358643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
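
minikube probes Docker twice here with docker system info --format "{{json .}}", once before and once after settling on the driver. A sketch of extracting just the fields it validates, assuming jq is installed:

    # Re-run minikube's probe and pull out the fields it checks:
    # cgroup driver, CPU count, and total memory in bytes.
    docker system info --format '{{json .}}' \
      | jq '{CgroupDriver, NCPU, MemTotal}'
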
	I1009 18:21:55.567631   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:55.567714   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:55.567780   34792 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.569913   34792 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:21:55.571250   34792 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:21:55.572672   34792 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:21:55.573890   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:55.573921   34792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:21:55.573933   34792 cache.go:64] Caching tarball of preloaded images
	I1009 18:21:55.573992   34792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:21:55.574016   34792 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:21:55.574025   34792 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
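
The preload logic above is a plain existence check against the local cache before any download is attempted. An equivalent sketch (the path uses the default MINIKUBE_HOME; this CI run keeps it under /home/jenkins/minikube-integration/21139-11374/.minikube instead):

    # Tarball minikube looks for before downloading a preload.
    PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
    [ -f "$PRELOAD" ] && echo "preload cached, skipping download"
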
	I1009 18:21:55.574109   34792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:21:55.593603   34792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:21:55.593631   34792 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:21:55.593646   34792 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:21:55.593672   34792 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:21:55.593732   34792 start.go:364] duration metric: took 38.489µs to acquireMachinesLock for "functional-753440"
	I1009 18:21:55.593749   34792 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:21:55.593758   34792 fix.go:54] fixHost starting: 
	I1009 18:21:55.593970   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:55.610925   34792 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:21:55.610951   34792 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:21:55.612681   34792 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:21:55.612704   34792 machine.go:93] provisionDockerMachine start ...
	I1009 18:21:55.612764   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.630174   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.630389   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.630401   34792 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:21:55.773949   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.773975   34792 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:21:55.774031   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.792726   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.792949   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.792962   34792 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:21:55.945969   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.946040   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.963600   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.963821   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.963839   34792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
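
The inlined script above is idempotent: it touches /etc/hosts only when no line already carries the hostname, and then either rewrites the existing 127.0.1.1 entry or appends one (\s in these grep/sed patterns is a GNU extension). A sketch for dry-running the same logic against a scratch copy instead of the live file:

    # Exercise the /etc/hosts edit on a copy; hosts.test is illustrative.
    cp /etc/hosts hosts.test
    if ! grep -q '\sfunctional-753440$' hosts.test; then
      if grep -q '^127.0.1.1\s' hosts.test; then
        sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/' hosts.test
      else
        echo '127.0.1.1 functional-753440' >> hosts.test
      fi
    fi
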
	I1009 18:21:56.108677   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:21:56.108700   34792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:21:56.108717   34792 ubuntu.go:190] setting up certificates
	I1009 18:21:56.108727   34792 provision.go:84] configureAuth start
	I1009 18:21:56.108783   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:56.127107   34792 provision.go:143] copyHostCerts
	I1009 18:21:56.127166   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127197   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:21:56.127212   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127290   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:21:56.127394   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127416   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:21:56.127420   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127449   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:21:56.127507   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127523   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:21:56.127526   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127549   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:21:56.127598   34792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:21:56.380428   34792 provision.go:177] copyRemoteCerts
	I1009 18:21:56.380482   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:21:56.380515   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.398054   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.500395   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:21:56.500448   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:21:56.517603   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:21:56.517655   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:21:56.534349   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:21:56.534397   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:21:56.551305   34792 provision.go:87] duration metric: took 442.551304ms to configureAuth
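
configureAuth above regenerates the machine's server certificate with SANs for the loopback address, the container IP, and the node's host names, then copies the CA and server pair into /etc/docker. minikube does this in Go; an openssl equivalent of the same issuance is roughly (all file names illustrative):

    # Server cert signed by the minikube CA, with the SANs from the log:
    # 127.0.0.1, 192.168.49.2, functional-753440, localhost, minikube.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.functional-753440"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 -extfile <(printf \
      'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-753440,DNS:localhost,DNS:minikube')
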
	I1009 18:21:56.551330   34792 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:21:56.551498   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:56.551579   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.568651   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:56.568866   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:56.568881   34792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:21:56.838390   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:21:56.838414   34792 machine.go:96] duration metric: took 1.225703269s to provisionDockerMachine
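
The SSH command above drops a one-line environment file for CRI-O and restarts the service so the --insecure-registry flag for the service CIDR takes effect. A sketch for verifying it from inside the node (run via minikube ssh; whether the flag shows up in the process arguments depends on how the crio unit consumes the file):

    # Show the file minikube wrote, then check the running crio process.
    cat /etc/sysconfig/crio.minikube
    ps -C crio -o args=
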
	I1009 18:21:56.838426   34792 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:21:56.838437   34792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:21:56.838510   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:21:56.838559   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.856450   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.959658   34792 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:21:56.963119   34792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 18:21:56.963150   34792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 18:21:56.963158   34792 command_runner.go:130] > VERSION_ID="12"
	I1009 18:21:56.963165   34792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 18:21:56.963174   34792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 18:21:56.963179   34792 command_runner.go:130] > ID=debian
	I1009 18:21:56.963186   34792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 18:21:56.963194   34792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 18:21:56.963212   34792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 18:21:56.963315   34792 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:21:56.963334   34792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
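
The PRETTY_NAME block is minikube echoing /etc/os-release back from the node, and the "Couldn't set key VERSION_CODENAME" line is benign: the struct minikube unmarshals into simply has no field for that key. Since the file is plain KEY=value shell syntax, it can be sourced directly (a sketch):

    # /etc/os-release is valid shell; source it to read the same fields.
    . /etc/os-release
    echo "$NAME $VERSION_ID ($VERSION_CODENAME)"
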
	I1009 18:21:56.963342   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:21:56.963382   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:21:56.963448   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:21:56.963463   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:21:56.963529   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:21:56.963535   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> /etc/test/nested/copy/14880/hosts
	I1009 18:21:56.963565   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:21:56.970888   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:56.988730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:21:57.005907   34792 start.go:296] duration metric: took 167.469505ms for postStartSetup
	I1009 18:21:57.005971   34792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:21:57.006025   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.023806   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.123166   34792 command_runner.go:130] > 39%
	I1009 18:21:57.123235   34792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:21:57.127917   34792 command_runner.go:130] > 179G
	I1009 18:21:57.127948   34792 fix.go:56] duration metric: took 1.534189396s for fixHost
	I1009 18:21:57.127960   34792 start.go:83] releasing machines lock for "functional-753440", held for 1.534218366s
	I1009 18:21:57.128034   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:57.145978   34792 ssh_runner.go:195] Run: cat /version.json
	I1009 18:21:57.146019   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.146063   34792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:21:57.146159   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.164302   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.164547   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.263542   34792 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 18:21:57.263690   34792 ssh_runner.go:195] Run: systemctl --version
	I1009 18:21:57.316955   34792 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 18:21:57.317002   34792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 18:21:57.317022   34792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 18:21:57.317074   34792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:21:57.353021   34792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:21:57.357737   34792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 18:21:57.357788   34792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:21:57.357834   34792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:21:57.365811   34792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:21:57.365833   34792 start.go:495] detecting cgroup driver to use...
	I1009 18:21:57.365861   34792 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:21:57.365903   34792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:21:57.380237   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:21:57.392796   34792 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:21:57.392859   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:21:57.407315   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:21:57.419892   34792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:21:57.506572   34792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:21:57.589596   34792 docker.go:234] disabling docker service ...
	I1009 18:21:57.589673   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:21:57.603725   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:21:57.615780   34792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:21:57.696218   34792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:21:57.781915   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
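
The block above detects the systemd cgroup driver on the host, then stops and masks containerd, cri-docker, and docker so that CRI-O is the only runtime left answering on the node. One way to make the same cgroup-driver call by hand is a cgroup v2 plus systemd-as-PID-1 check (a sketch of a common heuristic, not minikube's actual detection code):

    # "systemd" cgroup driver heuristic: cgroup v2 mounted at
    # /sys/fs/cgroup and systemd running as PID 1.
    if [ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ] \
       && [ "$(ps -p 1 -o comm=)" = "systemd" ]; then
      echo "use the systemd cgroup driver"
    fi
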
	I1009 18:21:57.794534   34792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:21:57.808497   34792 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
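
crictl has to be told where the CRI socket lives, so minikube writes the runtime endpoint into /etc/crictl.yaml up front. The same effect is available per invocation with a flag (a sketch):

    # Per-call equivalent of the /etc/crictl.yaml written above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps
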
	I1009 18:21:57.808534   34792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:21:57.808589   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.817764   34792 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:21:57.817814   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.827115   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.836066   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.844563   34792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:21:57.852458   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.861227   34792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.869900   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
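
Taken together, the sed pipeline above converges /etc/crio/crio.conf.d/02-crio.conf on a handful of settings: the pinned pause image, the systemd cgroup manager with conmon in the pod cgroup, and an unprivileged-port sysctl. Reconstructed from the commands (not read off the node), the resulting fragment looks like:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
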
	I1009 18:21:57.878917   34792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:21:57.886570   34792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 18:21:57.886644   34792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:21:57.894517   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:57.979064   34792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:21:58.090717   34792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:21:58.090783   34792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:21:58.095044   34792 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 18:21:58.095068   34792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 18:21:58.095074   34792 command_runner.go:130] > Device: 0,59	Inode: 3803        Links: 1
	I1009 18:21:58.095080   34792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.095085   34792 command_runner.go:130] > Access: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095093   34792 command_runner.go:130] > Modify: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095101   34792 command_runner.go:130] > Change: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095108   34792 command_runner.go:130] >  Birth: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095130   34792 start.go:563] Will wait 60s for crictl version
	I1009 18:21:58.095214   34792 ssh_runner.go:195] Run: which crictl
	I1009 18:21:58.099101   34792 command_runner.go:130] > /usr/local/bin/crictl
	I1009 18:21:58.099187   34792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:21:58.122816   34792 command_runner.go:130] > Version:  0.1.0
	I1009 18:21:58.122840   34792 command_runner.go:130] > RuntimeName:  cri-o
	I1009 18:21:58.122845   34792 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 18:21:58.122850   34792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 18:21:58.122867   34792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:21:58.122920   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.149899   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.149922   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.149928   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.149933   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.149944   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.149949   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.149952   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.149957   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.149961   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.149964   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.149967   34792 command_runner.go:130] >      static
	I1009 18:21:58.149971   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.149975   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.149978   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.149982   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.149988   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.149991   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.149998   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.150002   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.150007   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.151351   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.178662   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.178683   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.178689   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.178693   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.178698   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.178702   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.178706   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.178714   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.178718   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.178721   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.178724   34792 command_runner.go:130] >      static
	I1009 18:21:58.178728   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.178732   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.178735   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.178739   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.178742   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.178757   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.178764   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.178768   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.178771   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.181232   34792 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:21:58.182844   34792 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:21:58.200852   34792 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:21:58.205024   34792 command_runner.go:130] > 192.168.49.1	host.minikube.internal
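
The docker network inspect invocation above packs its output shaping into one dense Go template. The same fields are easier to read through jq (a sketch, assuming jq):

    # Same fields as minikube's Go template, extracted with jq instead.
    docker network inspect functional-753440 | jq '.[0]
      | {Name, Driver,
         Subnet: .IPAM.Config[0].Subnet, Gateway: .IPAM.Config[0].Gateway}'
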
	I1009 18:21:58.205096   34792 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:21:58.205232   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:58.205276   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.234303   34792 command_runner.go:130] > {
	I1009 18:21:58.234338   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.234345   34792 command_runner.go:130] >     {
	I1009 18:21:58.234355   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.234362   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234369   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.234373   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234378   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234388   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.234400   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.234409   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234417   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.234426   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234435   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234443   34792 command_runner.go:130] >     },
	I1009 18:21:58.234449   34792 command_runner.go:130] >     {
	I1009 18:21:58.234460   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.234468   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234478   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.234486   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234494   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234509   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.234523   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.234532   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234539   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.234548   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234565   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234581   34792 command_runner.go:130] >     },
	I1009 18:21:58.234590   34792 command_runner.go:130] >     {
	I1009 18:21:58.234600   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.234610   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234619   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.234627   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234635   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234649   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.234665   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.234673   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234680   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.234689   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.234697   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234713   34792 command_runner.go:130] >     },
	I1009 18:21:58.234721   34792 command_runner.go:130] >     {
	I1009 18:21:58.234731   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.234740   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234749   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.234757   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234765   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234780   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.234794   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.234802   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234809   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.234817   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234824   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234833   34792 command_runner.go:130] >       },
	I1009 18:21:58.234849   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234858   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234864   34792 command_runner.go:130] >     },
	I1009 18:21:58.234871   34792 command_runner.go:130] >     {
	I1009 18:21:58.234882   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.234891   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234906   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.234914   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234921   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234936   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.234952   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.234960   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234967   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.234976   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234984   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234991   34792 command_runner.go:130] >       },
	I1009 18:21:58.234999   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235008   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235015   34792 command_runner.go:130] >     },
	I1009 18:21:58.235023   34792 command_runner.go:130] >     {
	I1009 18:21:58.235033   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.235042   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235052   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.235059   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235065   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235078   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.235098   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.235106   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235113   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.235122   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235130   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235152   34792 command_runner.go:130] >       },
	I1009 18:21:58.235159   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235168   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235174   34792 command_runner.go:130] >     },
	I1009 18:21:58.235183   34792 command_runner.go:130] >     {
	I1009 18:21:58.235193   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.235202   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235211   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.235227   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235236   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235248   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.235262   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.235271   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235278   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.235286   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235294   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235302   34792 command_runner.go:130] >     },
	I1009 18:21:58.235314   34792 command_runner.go:130] >     {
	I1009 18:21:58.235326   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.235333   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235344   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.235352   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235359   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235373   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.235408   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.235416   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235424   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.235433   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235441   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235450   34792 command_runner.go:130] >       },
	I1009 18:21:58.235456   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235464   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235470   34792 command_runner.go:130] >     },
	I1009 18:21:58.235477   34792 command_runner.go:130] >     {
	I1009 18:21:58.235488   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.235496   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235508   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.235515   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235522   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235536   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.235550   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.235566   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235576   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.235582   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235592   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.235599   34792 command_runner.go:130] >       },
	I1009 18:21:58.235606   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235615   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.235621   34792 command_runner.go:130] >     }
	I1009 18:21:58.235627   34792 command_runner.go:130] >   ]
	I1009 18:21:58.235633   34792 command_runner.go:130] > }
	I1009 18:21:58.236008   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.236027   34792 crio.go:433] Images already preloaded, skipping extraction
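
The block above is the verbatim payload of sudo crictl images --output json, which minikube compares against its preload manifest; every expected image is present, so extraction is skipped. A compact way to eyeball the same inventory (a sketch, assuming jq):

    # One line per image: first tag plus size in MB.
    sudo crictl images --output json | jq -r \
      '.images[] | "\(.repoTags[0])\t\(.size | tonumber / 1048576 | floor) MB"'
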
	I1009 18:21:58.236090   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.260405   34792 command_runner.go:130] > {
	I1009 18:21:58.260434   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.260440   34792 command_runner.go:130] >     {
	I1009 18:21:58.260454   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.260464   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260473   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.260483   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260490   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260505   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.260520   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.260529   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260540   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.260550   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260560   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260566   34792 command_runner.go:130] >     },
	I1009 18:21:58.260575   34792 command_runner.go:130] >     {
	I1009 18:21:58.260586   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.260593   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260606   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.260615   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260624   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260639   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.260653   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.260661   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260667   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.260674   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260681   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260689   34792 command_runner.go:130] >     },
	I1009 18:21:58.260698   34792 command_runner.go:130] >     {
	I1009 18:21:58.260711   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.260721   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260732   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.260740   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260746   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260759   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.260769   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.260777   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260785   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.260794   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.260804   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260812   34792 command_runner.go:130] >     },
	I1009 18:21:58.260817   34792 command_runner.go:130] >     {
	I1009 18:21:58.260829   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.260838   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260848   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.260854   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260861   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260876   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.260890   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.260897   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260904   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.260914   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.260923   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.260931   34792 command_runner.go:130] >       },
	I1009 18:21:58.260939   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260949   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260957   34792 command_runner.go:130] >     },
	I1009 18:21:58.260965   34792 command_runner.go:130] >     {
	I1009 18:21:58.260974   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.260984   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260992   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.261000   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261007   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261018   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.261032   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.261040   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261047   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.261056   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261066   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261073   34792 command_runner.go:130] >       },
	I1009 18:21:58.261083   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261093   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261101   34792 command_runner.go:130] >     },
	I1009 18:21:58.261107   34792 command_runner.go:130] >     {
	I1009 18:21:58.261119   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.261128   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261153   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.261159   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261169   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261181   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.261196   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.261205   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261214   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.261223   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261234   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261243   34792 command_runner.go:130] >       },
	I1009 18:21:58.261249   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261258   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261266   34792 command_runner.go:130] >     },
	I1009 18:21:58.261270   34792 command_runner.go:130] >     {
	I1009 18:21:58.261283   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.261295   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261306   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.261314   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261321   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261334   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.261349   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.261356   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261364   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.261372   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261379   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261384   34792 command_runner.go:130] >     },
	I1009 18:21:58.261393   34792 command_runner.go:130] >     {
	I1009 18:21:58.261402   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.261409   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261417   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.261422   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261428   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261439   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.261460   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.261467   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261473   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.261482   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261491   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261498   34792 command_runner.go:130] >       },
	I1009 18:21:58.261507   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261516   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261525   34792 command_runner.go:130] >     },
	I1009 18:21:58.261533   34792 command_runner.go:130] >     {
	I1009 18:21:58.261543   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.261549   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261555   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.261563   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261570   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261584   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.261597   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.261607   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261614   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.261620   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261626   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.261632   34792 command_runner.go:130] >       },
	I1009 18:21:58.261636   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261641   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.261649   34792 command_runner.go:130] >     }
	I1009 18:21:58.261655   34792 command_runner.go:130] >   ]
	I1009 18:21:58.261663   34792 command_runner.go:130] > }
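
	The JSON inventory above is what minikube reads back from CRI-O before concluding, on the next line, that all images are preloaded. A minimal Go sketch of decoding that shape follows; the top-level "images" key and the struct fields are inferred from the output (note that "size" is serialized as a string, and some entries also carry a "uid" object, omitted here), so treat it as illustrative rather than minikube's actual types.

	    package main

	    import (
	    	"encoding/json"
	    	"fmt"
	    )

	    // image mirrors one entry of the list printed above
	    // (field names inferred from the JSON keys; illustrative only).
	    type image struct {
	    	ID          string   `json:"id"`
	    	RepoTags    []string `json:"repoTags"`
	    	RepoDigests []string `json:"repoDigests"`
	    	Size        string   `json:"size"` // a string in the output above, not a number
	    	Username    string   `json:"username"`
	    	Pinned      bool     `json:"pinned"`
	    }

	    type imageList struct {
	    	Images []image `json:"images"` // assumed top-level key; the dump above starts mid-list
	    }

	    func main() {
	    	raw := []byte(`{"images":[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoTags":["registry.k8s.io/pause:3.10.1"],"repoDigests":[],"size":"742092","username":"","pinned":true}]}`)
	    	var list imageList
	    	if err := json.Unmarshal(raw, &list); err != nil {
	    		panic(err)
	    	}
	    	for _, img := range list.Images {
	    		fmt.Println(img.RepoTags, "size:", img.Size, "pinned:", img.Pinned)
	    	}
	    }
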
	I1009 18:21:58.262011   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.262027   34792 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:21:58.262034   34792 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:21:58.262124   34792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
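
	The [Service] block above is rendered from the node entry { 192.168.49.2 8441 v1.34.1 crio true true } and the config struct that follows it: the binary path, --hostname-override and --node-ip are the host-specific pieces. A toy sketch of that substitution, with a hypothetical helper name (minikube's real template code is not shown in this log):

	    package main

	    import "fmt"

	    // kubeletFlags is a hypothetical helper reproducing only the
	    // host-specific parts of the ExecStart line above.
	    func kubeletFlags(version, nodeName, nodeIP string) string {
	    	return fmt.Sprintf(
	    		"/var/lib/minikube/binaries/%s/kubelet --hostname-override=%s --node-ip=%s",
	    		version, nodeName, nodeIP)
	    }

	    func main() {
	    	fmt.Println(kubeletFlags("v1.34.1", "functional-753440", "192.168.49.2"))
	    }
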
	I1009 18:21:58.262213   34792 ssh_runner.go:195] Run: crio config
	I1009 18:21:58.302300   34792 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 18:21:58.302331   34792 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 18:21:58.302340   34792 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 18:21:58.302345   34792 command_runner.go:130] > #
	I1009 18:21:58.302356   34792 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 18:21:58.302365   34792 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 18:21:58.302374   34792 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 18:21:58.302388   34792 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 18:21:58.302395   34792 command_runner.go:130] > # reload'.
	I1009 18:21:58.302413   34792 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 18:21:58.302424   34792 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 18:21:58.302434   34792 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 18:21:58.302446   34792 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 18:21:58.302451   34792 command_runner.go:130] > [crio]
	I1009 18:21:58.302460   34792 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 18:21:58.302491   34792 command_runner.go:130] > # container images, in this directory.
	I1009 18:21:58.302515   34792 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 18:21:58.302526   34792 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 18:21:58.302534   34792 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 18:21:58.302549   34792 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1009 18:21:58.302558   34792 command_runner.go:130] > # imagestore = ""
	I1009 18:21:58.302569   34792 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 18:21:58.302588   34792 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 18:21:58.302596   34792 command_runner.go:130] > # storage_driver = "overlay"
	I1009 18:21:58.302604   34792 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 18:21:58.302618   34792 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 18:21:58.302625   34792 command_runner.go:130] > # storage_option = [
	I1009 18:21:58.302630   34792 command_runner.go:130] > # ]
	I1009 18:21:58.302640   34792 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 18:21:58.302649   34792 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 18:21:58.302660   34792 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 18:21:58.302668   34792 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 18:21:58.302681   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 18:21:58.302689   34792 command_runner.go:130] > # always happen on a node reboot
	I1009 18:21:58.302700   34792 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 18:21:58.302714   34792 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 18:21:58.302727   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 18:21:58.302738   34792 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 18:21:58.302745   34792 command_runner.go:130] > # version_file_persist = ""
	I1009 18:21:58.302760   34792 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 18:21:58.302779   34792 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 18:21:58.302786   34792 command_runner.go:130] > # internal_wipe = true
	I1009 18:21:58.302800   34792 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 18:21:58.302809   34792 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 18:21:58.302823   34792 command_runner.go:130] > # internal_repair = true
	I1009 18:21:58.302832   34792 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 18:21:58.302841   34792 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 18:21:58.302850   34792 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 18:21:58.302858   34792 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 18:21:58.302871   34792 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 18:21:58.302877   34792 command_runner.go:130] > [crio.api]
	I1009 18:21:58.302889   34792 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 18:21:58.302895   34792 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 18:21:58.302903   34792 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 18:21:58.302908   34792 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 18:21:58.302918   34792 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 18:21:58.302922   34792 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 18:21:58.302928   34792 command_runner.go:130] > # stream_port = "0"
	I1009 18:21:58.302935   34792 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 18:21:58.302943   34792 command_runner.go:130] > # stream_enable_tls = false
	I1009 18:21:58.302953   34792 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 18:21:58.302963   34792 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 18:21:58.302972   34792 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 18:21:58.302984   34792 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303003   34792 command_runner.go:130] > # stream_tls_cert = ""
	I1009 18:21:58.303014   34792 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 18:21:58.303019   34792 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303024   34792 command_runner.go:130] > # stream_tls_key = ""
	I1009 18:21:58.303031   34792 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 18:21:58.303041   34792 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 18:21:58.303054   34792 command_runner.go:130] > # automatically pick up the changes.
	I1009 18:21:58.303061   34792 command_runner.go:130] > # stream_tls_ca = ""
	I1009 18:21:58.303083   34792 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303094   34792 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 18:21:58.303103   34792 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303111   34792 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 18:21:58.303120   34792 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 18:21:58.303130   34792 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 18:21:58.303156   34792 command_runner.go:130] > [crio.runtime]
	I1009 18:21:58.303167   34792 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 18:21:58.303176   34792 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 18:21:58.303182   34792 command_runner.go:130] > # "nofile=1024:2048"
	I1009 18:21:58.303192   34792 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 18:21:58.303201   34792 command_runner.go:130] > # default_ulimits = [
	I1009 18:21:58.303207   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303219   34792 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 18:21:58.303225   34792 command_runner.go:130] > # no_pivot = false
	I1009 18:21:58.303234   34792 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 18:21:58.303261   34792 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 18:21:58.303272   34792 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 18:21:58.303282   34792 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 18:21:58.303294   34792 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 18:21:58.303307   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303315   34792 command_runner.go:130] > # conmon = ""
	I1009 18:21:58.303321   34792 command_runner.go:130] > # Cgroup setting for conmon
	I1009 18:21:58.303330   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 18:21:58.303336   34792 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 18:21:58.303344   34792 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 18:21:58.303351   34792 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 18:21:58.303361   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303366   34792 command_runner.go:130] > # conmon_env = [
	I1009 18:21:58.303370   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303377   34792 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 18:21:58.303389   34792 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 18:21:58.303398   34792 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 18:21:58.303404   34792 command_runner.go:130] > # default_env = [
	I1009 18:21:58.303408   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303417   34792 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 18:21:58.303434   34792 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1009 18:21:58.303443   34792 command_runner.go:130] > # selinux = false
	I1009 18:21:58.303454   34792 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 18:21:58.303468   34792 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 18:21:58.303479   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303489   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.303500   34792 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 18:21:58.303513   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303520   34792 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 18:21:58.303530   34792 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 18:21:58.303543   34792 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 18:21:58.303553   34792 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 18:21:58.303567   34792 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 18:21:58.303578   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303586   34792 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 18:21:58.303597   34792 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 18:21:58.303603   34792 command_runner.go:130] > # the cgroup blockio controller.
	I1009 18:21:58.303610   34792 command_runner.go:130] > # blockio_config_file = ""
	I1009 18:21:58.303625   34792 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 18:21:58.303631   34792 command_runner.go:130] > # blockio parameters.
	I1009 18:21:58.303639   34792 command_runner.go:130] > # blockio_reload = false
	I1009 18:21:58.303649   34792 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 18:21:58.303659   34792 command_runner.go:130] > # irqbalance daemon.
	I1009 18:21:58.303667   34792 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 18:21:58.303718   34792 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 18:21:58.303738   34792 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 18:21:58.303748   34792 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 18:21:58.303756   34792 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 18:21:58.303765   34792 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 18:21:58.303772   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303777   34792 command_runner.go:130] > # rdt_config_file = ""
	I1009 18:21:58.303787   34792 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 18:21:58.303793   34792 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 18:21:58.303802   34792 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 18:21:58.303809   34792 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 18:21:58.303817   34792 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 18:21:58.303827   34792 command_runner.go:130] > # only the capabilities defined in the container json file by the user/kube
	I1009 18:21:58.303836   34792 command_runner.go:130] > # will be added.
	I1009 18:21:58.303844   34792 command_runner.go:130] > # default_capabilities = [
	I1009 18:21:58.303853   34792 command_runner.go:130] > # 	"CHOWN",
	I1009 18:21:58.303860   34792 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 18:21:58.303868   34792 command_runner.go:130] > # 	"FSETID",
	I1009 18:21:58.303874   34792 command_runner.go:130] > # 	"FOWNER",
	I1009 18:21:58.303883   34792 command_runner.go:130] > # 	"SETGID",
	I1009 18:21:58.303899   34792 command_runner.go:130] > # 	"SETUID",
	I1009 18:21:58.303908   34792 command_runner.go:130] > # 	"SETPCAP",
	I1009 18:21:58.303916   34792 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 18:21:58.303925   34792 command_runner.go:130] > # 	"KILL",
	I1009 18:21:58.303931   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303944   34792 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 18:21:58.303958   34792 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 18:21:58.303969   34792 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 18:21:58.303982   34792 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 18:21:58.304001   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304011   34792 command_runner.go:130] > default_sysctls = [
	I1009 18:21:58.304018   34792 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 18:21:58.304025   34792 command_runner.go:130] > ]
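
	That active default_sysctls entry is what lets non-root containers bind ports below 1024: with net.ipv4.ip_unprivileged_port_start=0, no port is privileged. A quick way to observe the effect from inside such a container is to bind a low port without root or CAP_NET_BIND_SERVICE; a minimal Go sketch:

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	// As an unprivileged user this bind succeeds only because
	    	// net.ipv4.ip_unprivileged_port_start=0 (set above) makes port 80
	    	// unprivileged; without it, CAP_NET_BIND_SERVICE would be required.
	    	ln, err := net.Listen("tcp", ":80")
	    	if err != nil {
	    		fmt.Println("bind failed:", err)
	    		return
	    	}
	    	defer ln.Close()
	    	fmt.Println("listening on", ln.Addr())
	    }
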
	I1009 18:21:58.304033   34792 command_runner.go:130] > # List of devices on the host that a
	I1009 18:21:58.304046   34792 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 18:21:58.304055   34792 command_runner.go:130] > # allowed_devices = [
	I1009 18:21:58.304063   34792 command_runner.go:130] > # 	"/dev/fuse",
	I1009 18:21:58.304071   34792 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 18:21:58.304077   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304088   34792 command_runner.go:130] > # List of additional devices, specified as
	I1009 18:21:58.304102   34792 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 18:21:58.304113   34792 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 18:21:58.304124   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304153   34792 command_runner.go:130] > # additional_devices = [
	I1009 18:21:58.304163   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304172   34792 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 18:21:58.304182   34792 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 18:21:58.304188   34792 command_runner.go:130] > # 	"/etc/cdi",
	I1009 18:21:58.304197   34792 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 18:21:58.304202   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304212   34792 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 18:21:58.304225   34792 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 18:21:58.304234   34792 command_runner.go:130] > # Defaults to false.
	I1009 18:21:58.304243   34792 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 18:21:58.304257   34792 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 18:21:58.304269   34792 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 18:21:58.304278   34792 command_runner.go:130] > # hooks_dir = [
	I1009 18:21:58.304287   34792 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 18:21:58.304294   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304304   34792 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 18:21:58.304317   34792 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 18:21:58.304329   34792 command_runner.go:130] > # its default mounts from the following two files:
	I1009 18:21:58.304337   34792 command_runner.go:130] > #
	I1009 18:21:58.304347   34792 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 18:21:58.304361   34792 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 18:21:58.304382   34792 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 18:21:58.304389   34792 command_runner.go:130] > #
	I1009 18:21:58.304399   34792 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 18:21:58.304413   34792 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 18:21:58.304427   34792 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 18:21:58.304438   34792 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 18:21:58.304447   34792 command_runner.go:130] > #
	I1009 18:21:58.304455   34792 command_runner.go:130] > # default_mounts_file = ""
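
	The mounts file format referenced above is one /SRC:/DST pair per line. A tiny parser for that stated format, with a hypothetical example path (CRI-O has its own parsing; this is only a sketch):

	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    // parseMount splits one "/SRC:/DST" line from a mounts.conf-style
	    // file into its source and destination paths.
	    func parseMount(line string) (src, dst string, err error) {
	    	parts := strings.SplitN(strings.TrimSpace(line), ":", 2)
	    	if len(parts) != 2 {
	    		return "", "", fmt.Errorf("malformed mount line %q", line)
	    	}
	    	return parts[0], parts[1], nil
	    }

	    func main() {
	    	// Hypothetical entry, not taken from this report.
	    	src, dst, err := parseMount("/usr/share/zoneinfo:/usr/share/zoneinfo")
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("mount", src, "->", dst)
	    }
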
	I1009 18:21:58.304466   34792 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 18:21:58.304479   34792 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 18:21:58.304494   34792 command_runner.go:130] > # pids_limit = -1
	I1009 18:21:58.304508   34792 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1009 18:21:58.304521   34792 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 18:21:58.304532   34792 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 18:21:58.304547   34792 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 18:21:58.304557   34792 command_runner.go:130] > # log_size_max = -1
	I1009 18:21:58.304569   34792 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 18:21:58.304578   34792 command_runner.go:130] > # log_to_journald = false
	I1009 18:21:58.304601   34792 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 18:21:58.304614   34792 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 18:21:58.304622   34792 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 18:21:58.304634   34792 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 18:21:58.304647   34792 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 18:21:58.304657   34792 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 18:21:58.304669   34792 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 18:21:58.304677   34792 command_runner.go:130] > # read_only = false
	I1009 18:21:58.304688   34792 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 18:21:58.304700   34792 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 18:21:58.304708   34792 command_runner.go:130] > # live configuration reload.
	I1009 18:21:58.304716   34792 command_runner.go:130] > # log_level = "info"
	I1009 18:21:58.304726   34792 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 18:21:58.304737   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.304746   34792 command_runner.go:130] > # log_filter = ""
	I1009 18:21:58.304761   34792 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304773   34792 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 18:21:58.304781   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304795   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304805   34792 command_runner.go:130] > # uid_mappings = ""
	I1009 18:21:58.304815   34792 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304827   34792 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 18:21:58.304837   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304849   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304863   34792 command_runner.go:130] > # gid_mappings = ""
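
	A range of the form containerUID:HostUID:Size (or the GID equivalent) maps Size consecutive IDs starting at containerUID onto host IDs starting at HostUID. A small Go sketch of that lookup, using a hypothetical mapping "0:100000:65536" under which container UID 1000 resolves to host UID 101000:

	    package main

	    import "fmt"

	    // hostUID resolves a container UID through one
	    // "containerUID:HostUID:Size" mapping range, per the format
	    // described above. Illustrative only.
	    func hostUID(containerUID, mapContainer, mapHost, size int) (int, bool) {
	    	if containerUID < mapContainer || containerUID >= mapContainer+size {
	    		return 0, false
	    	}
	    	return mapHost + (containerUID - mapContainer), true
	    }

	    func main() {
	    	// Hypothetical mapping "0:100000:65536".
	    	if uid, ok := hostUID(1000, 0, 100000, 65536); ok {
	    		fmt.Println(uid) // 101000
	    	}
	    }
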
	I1009 18:21:58.304890   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 18:21:58.304904   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304916   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304929   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304939   34792 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 18:21:58.304949   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 18:21:58.304961   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304971   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304986   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.305032   34792 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 18:21:58.305045   34792 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 18:21:58.305054   34792 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 18:21:58.305063   34792 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 18:21:58.305074   34792 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 18:21:58.305084   34792 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 18:21:58.305097   34792 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 18:21:58.305106   34792 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 18:21:58.305116   34792 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 18:21:58.305124   34792 command_runner.go:130] > # drop_infra_ctr = true
	I1009 18:21:58.305148   34792 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 18:21:58.305162   34792 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I1009 18:21:58.305177   34792 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 18:21:58.305185   34792 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 18:21:58.305197   34792 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 18:21:58.305209   34792 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 18:21:58.305222   34792 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 18:21:58.305233   34792 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 18:21:58.305241   34792 command_runner.go:130] > # shared_cpuset = ""
	I1009 18:21:58.305251   34792 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 18:21:58.305262   34792 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 18:21:58.305270   34792 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 18:21:58.305284   34792 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 18:21:58.305293   34792 command_runner.go:130] > # pinns_path = ""
	I1009 18:21:58.305305   34792 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 18:21:58.305318   34792 command_runner.go:130] > # checkpoint and restore containers or pods (even if CRIU is found in $PATH).
	I1009 18:21:58.305328   34792 command_runner.go:130] > # enable_criu_support = true
	I1009 18:21:58.305337   34792 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 18:21:58.305350   34792 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 18:21:58.305359   34792 command_runner.go:130] > # enable_pod_events = false
	I1009 18:21:58.305371   34792 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 18:21:58.305382   34792 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 18:21:58.305389   34792 command_runner.go:130] > # default_runtime = "crun"
	I1009 18:21:58.305401   34792 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 18:21:58.305415   34792 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of being created as a directory).
	I1009 18:21:58.305432   34792 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 18:21:58.305444   34792 command_runner.go:130] > # creation as a file is not desired either.
	I1009 18:21:58.305460   34792 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 18:21:58.305471   34792 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 18:21:58.305480   34792 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 18:21:58.305488   34792 command_runner.go:130] > # ]
	I1009 18:21:58.305499   34792 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 18:21:58.305512   34792 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 18:21:58.305524   34792 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 18:21:58.305535   34792 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 18:21:58.305542   34792 command_runner.go:130] > #
	I1009 18:21:58.305551   34792 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 18:21:58.305561   34792 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 18:21:58.305570   34792 command_runner.go:130] > # runtime_type = "oci"
	I1009 18:21:58.305582   34792 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 18:21:58.305590   34792 command_runner.go:130] > # inherit_default_runtime = false
	I1009 18:21:58.305601   34792 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 18:21:58.305611   34792 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 18:21:58.305619   34792 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 18:21:58.305628   34792 command_runner.go:130] > # monitor_env = []
	I1009 18:21:58.305638   34792 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 18:21:58.305647   34792 command_runner.go:130] > # allowed_annotations = []
	I1009 18:21:58.305665   34792 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 18:21:58.305674   34792 command_runner.go:130] > # no_sync_log = false
	I1009 18:21:58.305681   34792 command_runner.go:130] > # default_annotations = {}
	I1009 18:21:58.305690   34792 command_runner.go:130] > # stream_websockets = false
	I1009 18:21:58.305697   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.305730   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.305743   34792 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 18:21:58.305756   34792 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 18:21:58.305769   34792 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 18:21:58.305779   34792 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 18:21:58.305788   34792 command_runner.go:130] > #   in $PATH.
	I1009 18:21:58.305800   34792 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 18:21:58.305811   34792 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 18:21:58.305823   34792 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 18:21:58.305832   34792 command_runner.go:130] > #   state.
	I1009 18:21:58.305842   34792 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 18:21:58.305854   34792 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 18:21:58.305865   34792 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 18:21:58.305877   34792 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 18:21:58.305888   34792 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 18:21:58.305902   34792 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 18:21:58.305914   34792 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 18:21:58.305928   34792 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 18:21:58.305940   34792 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 18:21:58.305948   34792 command_runner.go:130] > #   The currently recognized values are:
	I1009 18:21:58.305962   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 18:21:58.305977   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 18:21:58.305989   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 18:21:58.306007   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 18:21:58.306022   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 18:21:58.306036   34792 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 18:21:58.306050   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 18:21:58.306061   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 18:21:58.306082   34792 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 18:21:58.306095   34792 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 18:21:58.306109   34792 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 18:21:58.306121   34792 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 18:21:58.306132   34792 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 18:21:58.306154   34792 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 18:21:58.306166   34792 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 18:21:58.306181   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 18:21:58.306194   34792 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 18:21:58.306204   34792 command_runner.go:130] > #   deprecated option "conmon".
	I1009 18:21:58.306216   34792 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 18:21:58.306226   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 18:21:58.306240   34792 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 18:21:58.306250   34792 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 18:21:58.306260   34792 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 18:21:58.306271   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 18:21:58.306285   34792 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1009 18:21:58.306294   34792 command_runner.go:130] > #   conmon-rs by using:
	I1009 18:21:58.306306   34792 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 18:21:58.306321   34792 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 18:21:58.306336   34792 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 18:21:58.306350   34792 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 18:21:58.306363   34792 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 18:21:58.306378   34792 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 18:21:58.306392   34792 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 18:21:58.306402   34792 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 18:21:58.306417   34792 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 18:21:58.306431   34792 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 18:21:58.306441   34792 command_runner.go:130] > #   when a machine crash happens.
	I1009 18:21:58.306452   34792 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 18:21:58.306467   34792 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 18:21:58.306481   34792 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 18:21:58.306492   34792 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 18:21:58.306506   34792 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 18:21:58.306520   34792 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 18:21:58.306525   34792 command_runner.go:130] > #
	I1009 18:21:58.306534   34792 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 18:21:58.306542   34792 command_runner.go:130] > #
	I1009 18:21:58.306552   34792 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 18:21:58.306565   34792 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 18:21:58.306574   34792 command_runner.go:130] > #
	I1009 18:21:58.306584   34792 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 18:21:58.306597   34792 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 18:21:58.306605   34792 command_runner.go:130] > #
	I1009 18:21:58.306615   34792 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 18:21:58.306623   34792 command_runner.go:130] > # feature.
	I1009 18:21:58.306629   34792 command_runner.go:130] > #
	I1009 18:21:58.306641   34792 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 18:21:58.306654   34792 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 18:21:58.306667   34792 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 18:21:58.306680   34792 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 18:21:58.306692   34792 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 18:21:58.306700   34792 command_runner.go:130] > #
	I1009 18:21:58.306710   34792 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 18:21:58.306723   34792 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 18:21:58.306730   34792 command_runner.go:130] > #
	I1009 18:21:58.306740   34792 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 18:21:58.306752   34792 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 18:21:58.306760   34792 command_runner.go:130] > #
	I1009 18:21:58.306770   34792 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 18:21:58.306782   34792 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 18:21:58.306788   34792 command_runner.go:130] > # limitation.
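
	Putting the notifier description above together: the runtime handler must list "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations, and the pod must carry that annotation (value "stop" to terminate offenders) with restartPolicy Never. Sketched below as plain Go data rather than a manifest, purely for illustration:

	    package main

	    import "fmt"

	    func main() {
	    	// Pod-side metadata for the notifier flow described above; in a
	    	// real manifest these are pod annotations plus spec.restartPolicy.
	    	annotations := map[string]string{
	    		"io.kubernetes.cri-o.seccompNotifierAction": "stop",
	    	}
	    	restartPolicy := "Never" // required, or the kubelet restarts the container immediately
	    	fmt.Println(annotations, restartPolicy)
	    }
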
	I1009 18:21:58.306798   34792 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 18:21:58.306809   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 18:21:58.306818   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306825   34792 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 18:21:58.306837   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306847   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306853   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306863   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306870   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.306879   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.306888   34792 command_runner.go:130] > allowed_annotations = [
	I1009 18:21:58.306898   34792 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 18:21:58.306904   34792 command_runner.go:130] > ]
	I1009 18:21:58.306914   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.306921   34792 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 18:21:58.306931   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 18:21:58.306937   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306944   34792 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 18:21:58.306952   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306962   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306970   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306980   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306989   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.307006   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.307017   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.307031   34792 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 18:21:58.307040   34792 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 18:21:58.307053   34792 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 18:21:58.307068   34792 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 18:21:58.307088   34792 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 18:21:58.307107   34792 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 18:21:58.307121   34792 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 18:21:58.307130   34792 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 18:21:58.307160   34792 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 18:21:58.307179   34792 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 18:21:58.307192   34792 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 18:21:58.307206   34792 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 18:21:58.307215   34792 command_runner.go:130] > # Example:
	I1009 18:21:58.307224   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 18:21:58.307234   34792 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 18:21:58.307244   34792 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 18:21:58.307253   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 18:21:58.307262   34792 command_runner.go:130] > # cpuset = "0-1"
	I1009 18:21:58.307269   34792 command_runner.go:130] > # cpushares = "5"
	I1009 18:21:58.307278   34792 command_runner.go:130] > # cpuquota = "1000"
	I1009 18:21:58.307285   34792 command_runner.go:130] > # cpuperiod = "100000"
	I1009 18:21:58.307294   34792 command_runner.go:130] > # cpulimit = "35"
	I1009 18:21:58.307301   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.307309   34792 command_runner.go:130] > # The workload name is workload-type.
	I1009 18:21:58.307323   34792 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 18:21:58.307336   34792 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 18:21:58.307349   34792 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 18:21:58.307365   34792 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 18:21:58.307377   34792 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
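
For concreteness, here is a minimal Go sketch of a pod opting into the hypothetical "workload-type" workload described in the config comments above. The pod and container names are illustrative, and the per-container annotation follows the $annotation_prefix.$resource/$ctrName form quoted above; none of this appears in the test run itself.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the activation annotation is key-only (its value is
	// ignored), and the second annotation overrides cpushares for the
	// container named "worker", per the annotation_prefix scheme above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "burst-job",
			Annotations: map[string]string{
				"io.crio/workload":                     "",  // opt into the workload
				"io.crio.workload-type.cpushares/worker": "5", // per-container override
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "worker", Image: "busybox"}},
		},
	}
	fmt.Println(pod.Annotations)
}
```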
	I1009 18:21:58.307388   34792 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 18:21:58.307399   34792 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 18:21:58.307410   34792 command_runner.go:130] > # Default value is set to true
	I1009 18:21:58.307418   34792 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 18:21:58.307430   34792 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 18:21:58.307440   34792 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 18:21:58.307449   34792 command_runner.go:130] > # Default value is set to 'false'
	I1009 18:21:58.307462   34792 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 18:21:58.307474   34792 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1009 18:21:58.307487   34792 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 18:21:58.307495   34792 command_runner.go:130] > # timezone = ""
	I1009 18:21:58.307506   34792 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 18:21:58.307513   34792 command_runner.go:130] > #
	I1009 18:21:58.307523   34792 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 18:21:58.307536   34792 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 18:21:58.307544   34792 command_runner.go:130] > [crio.image]
	I1009 18:21:58.307556   34792 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 18:21:58.307566   34792 command_runner.go:130] > # default_transport = "docker://"
	I1009 18:21:58.307578   34792 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 18:21:58.307591   34792 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307600   34792 command_runner.go:130] > # global_auth_file = ""
	I1009 18:21:58.307608   34792 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 18:21:58.307620   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307630   34792 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.307641   34792 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 18:21:58.307654   34792 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307665   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307675   34792 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 18:21:58.307686   34792 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 18:21:58.307698   34792 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 18:21:58.307708   34792 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 18:21:58.307719   34792 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 18:21:58.307727   34792 command_runner.go:130] > # pause_command = "/pause"
	I1009 18:21:58.307740   34792 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 18:21:58.307753   34792 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 18:21:58.307765   34792 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 18:21:58.307777   34792 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 18:21:58.307789   34792 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 18:21:58.307802   34792 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 18:21:58.307811   34792 command_runner.go:130] > # pinned_images = [
	I1009 18:21:58.307819   34792 command_runner.go:130] > # ]
	I1009 18:21:58.307830   34792 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 18:21:58.307842   34792 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 18:21:58.307855   34792 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 18:21:58.307868   34792 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 18:21:58.307879   34792 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 18:21:58.307887   34792 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 18:21:58.307899   34792 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 18:21:58.307912   34792 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 18:21:58.307930   34792 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 18:21:58.307943   34792 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1009 18:21:58.307955   34792 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 18:21:58.307971   34792 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 18:21:58.307982   34792 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 18:21:58.308001   34792 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 18:21:58.308010   34792 command_runner.go:130] > # changing them here.
	I1009 18:21:58.308020   34792 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 18:21:58.308029   34792 command_runner.go:130] > # insecure_registries = [
	I1009 18:21:58.308035   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308049   34792 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 18:21:58.308059   34792 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1009 18:21:58.308067   34792 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 18:21:58.308079   34792 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 18:21:58.308089   34792 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 18:21:58.308100   34792 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 18:21:58.308114   34792 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 18:21:58.308123   34792 command_runner.go:130] > # auto_reload_registries = false
	I1009 18:21:58.308133   34792 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 18:21:58.308163   34792 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1009 18:21:58.308174   34792 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 18:21:58.308183   34792 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 18:21:58.308191   34792 command_runner.go:130] > # The mode of short name resolution.
	I1009 18:21:58.308205   34792 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 18:21:58.308219   34792 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 18:21:58.308230   34792 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 18:21:58.308238   34792 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 18:21:58.308250   34792 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 18:21:58.308261   34792 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 18:21:58.308271   34792 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 18:21:58.308282   34792 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 18:21:58.308291   34792 command_runner.go:130] > # CNI plugins.
	I1009 18:21:58.308297   34792 command_runner.go:130] > [crio.network]
	I1009 18:21:58.308312   34792 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 18:21:58.308324   34792 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 18:21:58.308334   34792 command_runner.go:130] > # cni_default_network = ""
	I1009 18:21:58.308345   34792 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 18:21:58.308355   34792 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 18:21:58.308365   34792 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 18:21:58.308373   34792 command_runner.go:130] > # plugin_dirs = [
	I1009 18:21:58.308380   34792 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 18:21:58.308388   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308395   34792 command_runner.go:130] > # List of included pod metrics.
	I1009 18:21:58.308404   34792 command_runner.go:130] > # included_pod_metrics = [
	I1009 18:21:58.308411   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308423   34792 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1009 18:21:58.308429   34792 command_runner.go:130] > [crio.metrics]
	I1009 18:21:58.308440   34792 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 18:21:58.308447   34792 command_runner.go:130] > # enable_metrics = false
	I1009 18:21:58.308457   34792 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 18:21:58.308466   34792 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 18:21:58.308479   34792 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 18:21:58.308492   34792 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 18:21:58.308504   34792 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 18:21:58.308514   34792 command_runner.go:130] > # metrics_collectors = [
	I1009 18:21:58.308520   34792 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 18:21:58.308525   34792 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 18:21:58.308530   34792 command_runner.go:130] > # 	"containers_oom_total",
	I1009 18:21:58.308535   34792 command_runner.go:130] > # 	"processes_defunct",
	I1009 18:21:58.308540   34792 command_runner.go:130] > # 	"operations_total",
	I1009 18:21:58.308546   34792 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 18:21:58.308553   34792 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 18:21:58.308560   34792 command_runner.go:130] > # 	"operations_errors_total",
	I1009 18:21:58.308567   34792 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 18:21:58.308574   34792 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 18:21:58.308581   34792 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 18:21:58.308590   34792 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 18:21:58.308598   34792 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 18:21:58.308605   34792 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 18:21:58.308613   34792 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 18:21:58.308620   34792 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 18:21:58.308630   34792 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 18:21:58.308635   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308646   34792 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 18:21:58.308656   34792 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 18:21:58.308664   34792 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 18:21:58.308673   34792 command_runner.go:130] > # metrics_port = 9090
	I1009 18:21:58.308682   34792 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 18:21:58.308691   34792 command_runner.go:130] > # metrics_socket = ""
	I1009 18:21:58.308699   34792 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 18:21:58.308713   34792 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 18:21:58.308726   34792 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 18:21:58.308736   34792 command_runner.go:130] > # certificate on any modification event.
	I1009 18:21:58.308743   34792 command_runner.go:130] > # metrics_cert = ""
	I1009 18:21:58.308754   34792 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 18:21:58.308765   34792 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 18:21:58.308774   34792 command_runner.go:130] > # metrics_key = ""
	I1009 18:21:58.308785   34792 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 18:21:58.308793   34792 command_runner.go:130] > [crio.tracing]
	I1009 18:21:58.308803   34792 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 18:21:58.308812   34792 command_runner.go:130] > # enable_tracing = false
	I1009 18:21:58.308821   34792 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 18:21:58.308831   34792 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 18:21:58.308842   34792 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 18:21:58.308854   34792 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 18:21:58.308864   34792 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 18:21:58.308871   34792 command_runner.go:130] > [crio.nri]
	I1009 18:21:58.308879   34792 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 18:21:58.308888   34792 command_runner.go:130] > # enable_nri = true
	I1009 18:21:58.308908   34792 command_runner.go:130] > # NRI socket to listen on.
	I1009 18:21:58.308919   34792 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 18:21:58.308926   34792 command_runner.go:130] > # NRI plugin directory to use.
	I1009 18:21:58.308934   34792 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 18:21:58.308945   34792 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 18:21:58.308955   34792 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 18:21:58.308967   34792 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 18:21:58.309020   34792 command_runner.go:130] > # nri_disable_connections = false
	I1009 18:21:58.309031   34792 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 18:21:58.309039   34792 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 18:21:58.309050   34792 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 18:21:58.309060   34792 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 18:21:58.309070   34792 command_runner.go:130] > # NRI default validator configuration.
	I1009 18:21:58.309081   34792 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 18:21:58.309094   34792 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 18:21:58.309105   34792 command_runner.go:130] > # can be restricted/rejected:
	I1009 18:21:58.309114   34792 command_runner.go:130] > # - OCI hook injection
	I1009 18:21:58.309123   34792 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 18:21:58.309144   34792 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 18:21:58.309154   34792 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 18:21:58.309164   34792 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 18:21:58.309174   34792 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 18:21:58.309187   34792 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 18:21:58.309199   34792 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 18:21:58.309206   34792 command_runner.go:130] > #
	I1009 18:21:58.309213   34792 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 18:21:58.309228   34792 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 18:21:58.309239   34792 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 18:21:58.309249   34792 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 18:21:58.309259   34792 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 18:21:58.309270   34792 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 18:21:58.309282   34792 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 18:21:58.309292   34792 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 18:21:58.309300   34792 command_runner.go:130] > # ]
	I1009 18:21:58.309310   34792 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 18:21:58.309320   34792 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 18:21:58.309329   34792 command_runner.go:130] > [crio.stats]
	I1009 18:21:58.309338   34792 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 18:21:58.309350   34792 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 18:21:58.309361   34792 command_runner.go:130] > # stats_collection_period = 0
	I1009 18:21:58.309373   34792 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 18:21:58.309386   34792 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 18:21:58.309395   34792 command_runner.go:130] > # collection_period = 0
	I1009 18:21:58.309439   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287848676Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 18:21:58.309455   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287874416Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 18:21:58.309486   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.28789246Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 18:21:58.309504   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287909281Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 18:21:58.309520   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287966347Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:58.309548   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.288147535Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 18:21:58.309568   34792 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 18:21:58.309652   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:58.309667   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:58.309686   34792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:21:58.309718   34792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:21:58.309867   34792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:21:58.309941   34792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:21:58.317943   34792 command_runner.go:130] > kubeadm
	I1009 18:21:58.317964   34792 command_runner.go:130] > kubectl
	I1009 18:21:58.317972   34792 command_runner.go:130] > kubelet
	I1009 18:21:58.317992   34792 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:21:58.318041   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:21:58.325700   34792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:21:58.338455   34792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:21:58.350701   34792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:21:58.362930   34792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:21:58.366724   34792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1009 18:21:58.366809   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:58.451602   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:58.464478   34792 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:21:58.464503   34792 certs.go:195] generating shared ca certs ...
	I1009 18:21:58.464518   34792 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:58.464657   34792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:21:58.464699   34792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:21:58.464708   34792 certs.go:257] generating profile certs ...
	I1009 18:21:58.464789   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:21:58.464832   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:21:58.464870   34792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:21:58.464880   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:21:58.464891   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:21:58.464904   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:21:58.464914   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:21:58.464926   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:21:58.464938   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:21:58.464950   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:21:58.464961   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:21:58.465007   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:21:58.465033   34792 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:21:58.465040   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:21:58.465060   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:21:58.465083   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:21:58.465117   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:21:58.465182   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:58.465212   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.465226   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.465252   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.465730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:21:58.483386   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:21:58.500383   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:21:58.517315   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:21:58.533903   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:21:58.550845   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:21:58.567242   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:21:58.584667   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:21:58.601626   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:21:58.618749   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:21:58.635789   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:21:58.652270   34792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:21:58.664508   34792 ssh_runner.go:195] Run: openssl version
	I1009 18:21:58.670569   34792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 18:21:58.670643   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:21:58.679189   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683037   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683067   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683111   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.716325   34792 command_runner.go:130] > b5213941
	I1009 18:21:58.716574   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:21:58.724647   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:21:58.732750   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736237   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736342   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736392   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.769488   34792 command_runner.go:130] > 51391683
	I1009 18:21:58.769675   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:21:58.778213   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:21:58.786758   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790431   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790472   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790516   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.824579   34792 command_runner.go:130] > 3ec20f2e
	I1009 18:21:58.824670   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
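
The three openssl/ln pairs above follow the standard OpenSSL CA-directory convention: the certificate's subject hash (e.g. b5213941) names a <hash>.0 symlink under /etc/ssl/certs. A minimal Go sketch of the same sequence, shelling out to openssl as the test does over SSH (the path is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert symlinks certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the `openssl x509 -hash` + `ln -fs` pair above.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```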
	I1009 18:21:58.832975   34792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836722   34792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836745   34792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 18:21:58.836750   34792 command_runner.go:130] > Device: 8,1	Inode: 583629      Links: 1
	I1009 18:21:58.836756   34792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.836762   34792 command_runner.go:130] > Access: 2025-10-09 18:17:52.024667536 +0000
	I1009 18:21:58.836766   34792 command_runner.go:130] > Modify: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836771   34792 command_runner.go:130] > Change: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836775   34792 command_runner.go:130] >  Birth: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836829   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:21:58.871297   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.871384   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:21:58.905951   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.906293   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:21:58.941072   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.941180   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:21:58.975637   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.975713   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:21:59.010686   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:59.010763   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 18:21:59.045288   34792 command_runner.go:130] > Certificate will not expire
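
Each `-checkend 86400` invocation above asks whether a certificate expires within the next 24 hours. The equivalent check in pure Go (a sketch; the path is one of the certs checked above):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if !expiring {
		fmt.Println("Certificate will not expire")
	}
}
```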
	I1009 18:21:59.045372   34792 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:59.045468   34792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:21:59.045548   34792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:21:59.072734   34792 cri.go:89] found id: ""
	I1009 18:21:59.072811   34792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:21:59.080291   34792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 18:21:59.080312   34792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 18:21:59.080317   34792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 18:21:59.080960   34792 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:21:59.080977   34792 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:21:59.081028   34792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:21:59.088791   34792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:21:59.088891   34792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.088923   34792 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753440" cluster setting kubeconfig missing "functional-753440" context setting]
	I1009 18:21:59.089226   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
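
The repair logged at kubeconfig.go:62 amounts to inserting the missing cluster and context entries and rewriting the file under a write lock. A sketch of that shape using client-go's clientcmd package (names and paths are illustrative, not minikube's actual implementation):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds a cluster/context pair for name if it is missing,
// roughly the "needs updating (will repair)" step logged above.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server // e.g. https://192.168.49.2:8441
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := repairKubeconfig(
		"/home/jenkins/minikube-integration/21139-11374/kubeconfig",
		"functional-753440",
		"https://192.168.49.2:8441",
	)
	if err != nil {
		fmt.Println(err)
	}
}
```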
	I1009 18:21:59.115972   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.116113   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.116551   34792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:21:59.116565   34792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:21:59.116570   34792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:21:59.116574   34792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:21:59.116578   34792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:21:59.116681   34792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:21:59.116939   34792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:21:59.125251   34792 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:21:59.125284   34792 kubeadm.go:601] duration metric: took 44.302105ms to restartPrimaryControlPlane
	I1009 18:21:59.125294   34792 kubeadm.go:402] duration metric: took 79.928873ms to StartCluster
	I1009 18:21:59.125313   34792 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.125417   34792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.125977   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.126266   34792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:21:59.126330   34792 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:21:59.126472   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:59.126485   34792 addons.go:69] Setting default-storageclass=true in profile "functional-753440"
	I1009 18:21:59.126503   34792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753440"
	I1009 18:21:59.126475   34792 addons.go:69] Setting storage-provisioner=true in profile "functional-753440"
	I1009 18:21:59.126533   34792 addons.go:238] Setting addon storage-provisioner=true in "functional-753440"
	I1009 18:21:59.126575   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.126787   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.126953   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.129433   34792 out.go:179] * Verifying Kubernetes components...
	I1009 18:21:59.130694   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:59.147348   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.147489   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.147681   34792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:21:59.147763   34792 addons.go:238] Setting addon default-storageclass=true in "functional-753440"
	I1009 18:21:59.147799   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.148103   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.149131   34792 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.149169   34792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:21:59.149223   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172020   34792 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.172047   34792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:21:59.172108   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172953   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.190936   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.227445   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:59.240811   34792 node_ready.go:35] waiting up to 6m0s for node "functional-753440" to be "Ready" ...
	I1009 18:21:59.240954   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.241028   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.241430   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
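
From here node_ready.go repeatedly GETs /api/v1/nodes/functional-753440 (note the empty response status above: the API server is not answering yet) until the Node reports Ready=True or the 6m budget runs out. A simplified sketch of that polling loop with client-go (the 2s interval is illustrative):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node object until its Ready condition is True,
// mirroring the "waiting up to 6m0s for node ... to be Ready" loop above.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %s not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-11374/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "functional-753440", 6*time.Minute))
}
```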
	I1009 18:21:59.284375   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.300190   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.338559   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.338609   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.338653   34792 retry.go:31] will retry after 183.514108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353053   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.353121   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353157   34792 retry.go:31] will retry after 252.751171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.522422   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.573424   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.575988   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.576058   34792 retry.go:31] will retry after 293.779687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.606194   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.660438   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.660484   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.660501   34792 retry.go:31] will retry after 279.387954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.741722   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.741829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.742206   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.870497   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.921333   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.923563   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.923589   34792 retry.go:31] will retry after 737.997993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.940822   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.989898   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.992209   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.992239   34792 retry.go:31] will retry after 533.533276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.241740   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.242177   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:00.526746   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:00.575738   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.578103   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.578131   34792 retry.go:31] will retry after 930.387704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.662455   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:00.715389   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.715427   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.715452   34792 retry.go:31] will retry after 867.874306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.741572   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:01.241687   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.241751   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.242091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:01.242159   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
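
Interleaved with the applies, the GET requests to /api/v1/nodes/functional-753440 roughly every 500ms are the `node_ready` wait loop from earlier ("waiting up to 6m0s"), checking the node's Ready condition and tolerating connection-refused errors like the one above while the apiserver restarts. Below is a compact client-go version of that loop, offered as a sketch under stated assumptions rather than minikube's implementation: the kubeconfig path, helper name, and polling interval are illustrative, while the node name and six-minute budget come from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True or
// the timeout expires. Transient errors (e.g. connection refused while the
// apiserver comes back) are ignored and the poll simply continues.
func waitNodeReady(name string, timeout time.Duration) error {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	if err := waitNodeReady("functional-753440", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```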
	I1009 18:22:01.509541   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:01.558188   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.560577   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.560605   34792 retry.go:31] will retry after 1.199996419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
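
The `retry.go:31` delays are worth reading across entries: 183ms, 252ms, 293ms, then 737ms, 1.2s, and multi-second waits later in the log, which looks like randomized exponential backoff. A small sketch of that shape follows; the helper and its starting interval are hypothetical, and minikube's actual retry package is more general than this:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
// succeeds or attempts run out, sleeping a jittered, doubling interval
// between tries -- the pattern behind the "will retry after ..." lines.
func applyWithRetry(manifest string, attempts int) error {
	backoff := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 8); err != nil {
		fmt.Println("giving up:", err)
	}
}
```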
	I1009 18:22:01.583824   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:01.634758   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.634811   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.634834   34792 retry.go:31] will retry after 674.661756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.741022   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.741106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.741428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.309923   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:02.359167   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.361481   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.361513   34792 retry.go:31] will retry after 1.255051156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.741014   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.741086   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.741469   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.761694   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:02.809418   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.811709   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.811735   34792 retry.go:31] will retry after 2.010356843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.241377   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.241665   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:03.617237   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:03.670575   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:03.670619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.670643   34792 retry.go:31] will retry after 3.029315393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:03.742368   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:04.241167   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.241255   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.241616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.741405   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.741793   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.823125   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:04.874252   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:04.876942   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:04.876977   34792 retry.go:31] will retry after 2.337146666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:05.241523   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.241603   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.241925   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:05.741876   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.741944   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.742306   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.241056   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.241120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.241524   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:06.241591   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:06.701185   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:06.741960   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.742030   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.742348   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.753588   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:06.753625   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:06.753645   34792 retry.go:31] will retry after 5.067292314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.214286   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:07.241989   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.242085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.242465   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:07.267576   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:07.267619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.267638   34792 retry.go:31] will retry after 3.639407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.741211   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.741611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:08.241376   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.241468   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.241797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:08.241859   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:08.741654   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.741723   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.742130   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.742012   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.742487   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.241171   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.241238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.241608   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.741552   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.741634   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.741987   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:10.742077   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:10.907343   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:10.958356   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:10.960749   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:10.960774   34792 retry.go:31] will retry after 7.184910667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.241304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.241646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.741393   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.741703   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.821955   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:11.870785   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:11.873227   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.873260   34792 retry.go:31] will retry after 9.534535371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:12.241850   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.242244   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:12.741040   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.741121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:13.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:13.241344   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:13.241681   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:13.241752   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:13.741448   34792 type.go:168] "Request Body" body=""
	I1009 18:22:13.741557   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:13.741881   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:14.241703   34792 type.go:168] "Request Body" body=""
	I1009 18:22:14.241767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:14.242071   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:14.741971   34792 type.go:168] "Request Body" body=""
	I1009 18:22:14.742058   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:14.742415   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:15.241162   34792 type.go:168] "Request Body" body=""
	I1009 18:22:15.241227   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:15.241543   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:15.741329   34792 type.go:168] "Request Body" body=""
	I1009 18:22:15.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:15.741713   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:15.741779   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:16.241461   34792 type.go:168] "Request Body" body=""
	I1009 18:22:16.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:16.241841   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:16.741694   34792 type.go:168] "Request Body" body=""
	I1009 18:22:16.741756   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:16.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:17.241938   34792 type.go:168] "Request Body" body=""
	I1009 18:22:17.242012   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:17.242354   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:17.741119   34792 type.go:168] "Request Body" body=""
	I1009 18:22:17.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:17.741520   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:18.146014   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:18.197672   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:18.200076   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.200108   34792 retry.go:31] will retry after 13.416592948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.241338   34792 type.go:168] "Request Body" body=""
	I1009 18:22:18.241421   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:18.241742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:18.241815   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:18.741635   34792 type.go:168] "Request Body" body=""
	I1009 18:22:18.741716   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:18.742048   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:19.241915   34792 type.go:168] "Request Body" body=""
	I1009 18:22:19.241986   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:19.242351   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:19.741113   34792 type.go:168] "Request Body" body=""
	I1009 18:22:19.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:19.741558   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:20.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:22:20.241372   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:20.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:20.741538   34792 type.go:168] "Request Body" body=""
	I1009 18:22:20.741648   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:20.742078   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:20.742168   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:21.241982   34792 type.go:168] "Request Body" body=""
	I1009 18:22:21.242072   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:21.242428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:21.408800   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:21.460386   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:21.460443   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:21.460465   34792 retry.go:31] will retry after 6.196258431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:21.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:21.741973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:21.742340   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:22.241109   34792 type.go:168] "Request Body" body=""
	I1009 18:22:22.241216   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:22.241540   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:22.741267   34792 type.go:168] "Request Body" body=""
	I1009 18:22:22.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:22.741668   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:23.241400   34792 type.go:168] "Request Body" body=""
	I1009 18:22:23.241466   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:23.241777   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:23.241839   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:23.741636   34792 type.go:168] "Request Body" body=""
	I1009 18:22:23.741720   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:23.742032   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:24.241849   34792 type.go:168] "Request Body" body=""
	I1009 18:22:24.241912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:24.242229   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:24.740969   34792 type.go:168] "Request Body" body=""
	I1009 18:22:24.741034   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:24.741359   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:25.241097   34792 type.go:168] "Request Body" body=""
	I1009 18:22:25.241186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:25.241506   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:25.741317   34792 type.go:168] "Request Body" body=""
	I1009 18:22:25.741384   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:25.741717   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:25.741785   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:26.241467   34792 type.go:168] "Request Body" body=""
	I1009 18:22:26.241530   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:26.241836   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:26.741641   34792 type.go:168] "Request Body" body=""
	I1009 18:22:26.741717   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:26.742054   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:27.241867   34792 type.go:168] "Request Body" body=""
	I1009 18:22:27.241935   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:27.242289   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:27.657912   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:27.709732   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:27.709776   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:27.709796   34792 retry.go:31] will retry after 21.104663041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
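
Each validation failure above is kubectl's client-side schema check: before applying, it fetches `/openapi/v2` from the apiserver, and that fetch is what dies with connection refused. A quick probe of the same endpoint makes the symptom reproducible in isolation (insecure TLS is acceptable here only because reachability, not identity, is being tested):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip certificate verification: we only care whether anything answers
	// on localhost:8441, the endpoint kubectl's validator downloads from.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // matches the log's "connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered:", resp.Status)
}
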
	I1009 18:22:27.741976   34792 type.go:168] "Request Body" body=""
	I1009 18:22:27.742060   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:27.742387   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:27.742447   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:28.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:28.241272   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:28.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:28.741374   34792 type.go:168] "Request Body" body=""
	I1009 18:22:28.741445   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:28.741741   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:29.241532   34792 type.go:168] "Request Body" body=""
	I1009 18:22:29.241600   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:29.241930   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:29.741720   34792 type.go:168] "Request Body" body=""
	I1009 18:22:29.741782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:29.742115   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:30.241968   34792 type.go:168] "Request Body" body=""
	I1009 18:22:30.242038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:30.242354   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:30.242406   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:30.741168   34792 type.go:168] "Request Body" body=""
	I1009 18:22:30.741235   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:30.741522   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:31.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:31.241332   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:31.241693   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:31.617269   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:31.669784   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:31.669834   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:31.669851   34792 retry.go:31] will retry after 15.154475243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
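
The kubectl error text itself names the escape hatch: `--validate=false` skips the `/openapi/v2` download entirely. A sketch of that variant of the storageclass apply, exactly as the message suggests (note this only removes the pre-flight; the apply itself would still fail here, since the apiserver is down for the real request too):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command as in the log, with client-side validation disabled.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
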
	I1009 18:22:31.740998   34792 type.go:168] "Request Body" body=""
	I1009 18:22:31.741063   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:31.741420   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:32.241118   34792 type.go:168] "Request Body" body=""
	I1009 18:22:32.241207   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:32.241526   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:32.741162   34792 type.go:168] "Request Body" body=""
	I1009 18:22:32.741230   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:32.741578   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:32.741636   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:33.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:33.241273   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:33.241600   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:33.741209   34792 type.go:168] "Request Body" body=""
	I1009 18:22:33.741274   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:33.741593   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:34.241252   34792 type.go:168] "Request Body" body=""
	I1009 18:22:34.241319   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:34.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:34.741297   34792 type.go:168] "Request Body" body=""
	I1009 18:22:34.741366   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:34.741662   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:34.741714   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:35.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:22:35.241319   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:35.241631   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:35.741518   34792 type.go:168] "Request Body" body=""
	I1009 18:22:35.741590   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:35.741908   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:36.241473   34792 type.go:168] "Request Body" body=""
	I1009 18:22:36.241537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:36.241867   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:36.741507   34792 type.go:168] "Request Body" body=""
	I1009 18:22:36.741582   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:36.741900   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:36.741954   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:37.241503   34792 type.go:168] "Request Body" body=""
	I1009 18:22:37.241570   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:37.241880   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:37.741492   34792 type.go:168] "Request Body" body=""
	I1009 18:22:37.741564   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:37.741883   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:38.241508   34792 type.go:168] "Request Body" body=""
	I1009 18:22:38.241573   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:38.241878   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:38.741474   34792 type.go:168] "Request Body" body=""
	I1009 18:22:38.741571   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:38.741868   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:39.241856   34792 type.go:168] "Request Body" body=""
	I1009 18:22:39.241916   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:39.242237   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:39.242300   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:39.741898   34792 type.go:168] "Request Body" body=""
	I1009 18:22:39.741969   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:39.742303   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:40.241969   34792 type.go:168] "Request Body" body=""
	I1009 18:22:40.242062   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:40.242400   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:40.741170   34792 type.go:168] "Request Body" body=""
	I1009 18:22:40.741238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:40.741556   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:41.241169   34792 type.go:168] "Request Body" body=""
	I1009 18:22:41.241235   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:41.241568   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:41.741187   34792 type.go:168] "Request Body" body=""
	I1009 18:22:41.741253   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:41.741589   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:41.741643   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:42.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:42.241272   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:42.241611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:42.741205   34792 type.go:168] "Request Body" body=""
	I1009 18:22:42.741278   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:42.741595   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:43.241190   34792 type.go:168] "Request Body" body=""
	I1009 18:22:43.241258   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:43.241582   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:43.741198   34792 type.go:168] "Request Body" body=""
	I1009 18:22:43.741263   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:43.741575   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:44.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:44.241263   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:44.241577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:44.241629   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:44.741212   34792 type.go:168] "Request Body" body=""
	I1009 18:22:44.741283   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:44.741598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:45.241235   34792 type.go:168] "Request Body" body=""
	I1009 18:22:45.241301   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:45.241671   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:45.741562   34792 type.go:168] "Request Body" body=""
	I1009 18:22:45.741629   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:45.741942   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:46.241628   34792 type.go:168] "Request Body" body=""
	I1009 18:22:46.241692   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:46.241993   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:46.242063   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:46.741676   34792 type.go:168] "Request Body" body=""
	I1009 18:22:46.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:46.742077   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:46.825331   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:46.875678   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:46.878302   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:46.878331   34792 retry.go:31] will retry after 24.753743157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:47.241842   34792 type.go:168] "Request Body" body=""
	I1009 18:22:47.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:47.242245   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:47.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:22:47.741128   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:47.741463   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:48.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:48.241284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:48.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:48.741361   34792 type.go:168] "Request Body" body=""
	I1009 18:22:48.741434   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:48.741764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:48.741814   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:48.815023   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:48.866903   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:48.866953   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:48.866975   34792 retry.go:31] will retry after 23.693621864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
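
Both failure modes in this stretch reduce to one symptom: nothing is listening on port 8441, whether addressed as `localhost` (kubectl's validator) or `192.168.49.2` (minikube's readiness poll). A bare TCP probe of both addresses, copied from the log, makes that concrete; the probe itself is just an illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The two apiserver addresses seen in the log: kubectl validates against
	// localhost inside the node, while the readiness poll targets the node IP.
	for _, addr := range []string{"localhost:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // expect "connection refused" here
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}
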
	I1009 18:22:49.241681   34792 type.go:168] "Request Body" body=""
	I1009 18:22:49.241760   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:49.242189   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:49.741809   34792 type.go:168] "Request Body" body=""
	I1009 18:22:49.741872   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:49.742216   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:50.241969   34792 type.go:168] "Request Body" body=""
	I1009 18:22:50.242049   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:50.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:50.741244   34792 type.go:168] "Request Body" body=""
	I1009 18:22:50.741312   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:50.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:51.241250   34792 type.go:168] "Request Body" body=""
	I1009 18:22:51.241336   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:51.241653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:51.241707   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:51.741250   34792 type.go:168] "Request Body" body=""
	I1009 18:22:51.741317   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:51.741731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:52.241243   34792 type.go:168] "Request Body" body=""
	I1009 18:22:52.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:52.241668   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:52.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:22:52.741378   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:52.741687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:53.241293   34792 type.go:168] "Request Body" body=""
	I1009 18:22:53.241355   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:53.241674   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:53.241725   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:53.741263   34792 type.go:168] "Request Body" body=""
	I1009 18:22:53.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:53.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:54.241249   34792 type.go:168] "Request Body" body=""
	I1009 18:22:54.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:54.241652   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:54.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:22:54.741337   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:54.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:55.241278   34792 type.go:168] "Request Body" body=""
	I1009 18:22:55.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:55.241675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:55.741565   34792 type.go:168] "Request Body" body=""
	I1009 18:22:55.741632   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:55.741942   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:55.741993   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:56.241590   34792 type.go:168] "Request Body" body=""
	I1009 18:22:56.241657   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:56.241967   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:56.741618   34792 type.go:168] "Request Body" body=""
	I1009 18:22:56.741686   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:56.742001   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:57.241690   34792 type.go:168] "Request Body" body=""
	I1009 18:22:57.241747   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:57.242085   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:57.741794   34792 type.go:168] "Request Body" body=""
	I1009 18:22:57.741866   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:57.742231   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:57.742290   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:58.241896   34792 type.go:168] "Request Body" body=""
	I1009 18:22:58.241964   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:58.242341   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:58.740987   34792 type.go:168] "Request Body" body=""
	I1009 18:22:58.741057   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:58.741430   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:59.241270   34792 type.go:168] "Request Body" body=""
	I1009 18:22:59.241374   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:59.241705   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:59.741305   34792 type.go:168] "Request Body" body=""
	I1009 18:22:59.741378   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:59.741671   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:00.241318   34792 type.go:168] "Request Body" body=""
	I1009 18:23:00.241386   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:00.241730   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:00.241783   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:00.741584   34792 type.go:168] "Request Body" body=""
	I1009 18:23:00.741655   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:00.741970   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:01.241670   34792 type.go:168] "Request Body" body=""
	I1009 18:23:01.241740   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:01.242056   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:01.741725   34792 type.go:168] "Request Body" body=""
	I1009 18:23:01.741789   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:01.742109   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:02.241790   34792 type.go:168] "Request Body" body=""
	I1009 18:23:02.241853   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:02.242215   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:02.242270   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:02.741914   34792 type.go:168] "Request Body" body=""
	I1009 18:23:02.741984   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:02.742352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:03.242008   34792 type.go:168] "Request Body" body=""
	I1009 18:23:03.242088   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:03.242455   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:03.741186   34792 type.go:168] "Request Body" body=""
	I1009 18:23:03.741250   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:03.741576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:04.241269   34792 type.go:168] "Request Body" body=""
	I1009 18:23:04.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:04.241673   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:04.741396   34792 type.go:168] "Request Body" body=""
	I1009 18:23:04.741460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:04.741772   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:04.741828   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:05.241582   34792 type.go:168] "Request Body" body=""
	I1009 18:23:05.241646   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:05.241956   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:05.741882   34792 type.go:168] "Request Body" body=""
	I1009 18:23:05.741951   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:05.742320   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:06.241065   34792 type.go:168] "Request Body" body=""
	I1009 18:23:06.241173   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:06.241497   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:06.741232   34792 type.go:168] "Request Body" body=""
	I1009 18:23:06.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:06.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:07.241402   34792 type.go:168] "Request Body" body=""
	I1009 18:23:07.241487   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:07.241813   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:07.241865   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:07.741620   34792 type.go:168] "Request Body" body=""
	I1009 18:23:07.741692   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:07.742021   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:08.241855   34792 type.go:168] "Request Body" body=""
	I1009 18:23:08.241917   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:08.242226   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:08.741000   34792 type.go:168] "Request Body" body=""
	I1009 18:23:08.741070   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:08.741419   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:09.241169   34792 type.go:168] "Request Body" body=""
	I1009 18:23:09.241236   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.241556   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:09.741160   34792 type.go:168] "Request Body" body=""
	I1009 18:23:09.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.741542   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:09.741611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:10.241116   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:10.741472   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.741586   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.741912   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.241829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.242195   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.632645   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:11.684065   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:11.686606   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:23:11.686651   34792 retry.go:31] will retry after 43.228082894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
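	The retry above is scheduled with a randomized delay (43.2s here) rather than a fixed interval, so addon appliers that fail together do not hammer the apiserver in lockstep. A rough Go sketch of that shape, shelling out to the same kubectl invocation with jittered backoff (illustrative only; minikube's actual retry.go differs in detail, and the manifest path is the one from this log):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
	// sleeps a random delay of up to a minute before trying again.
	func applyWithRetry(manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if e == nil {
				return nil
			}
			err = fmt.Errorf("apply %s: %v\n%s", manifest, e, out)
			delay := time.Duration(rand.Int63n(int64(time.Minute))) // jitter, not a fixed backoff
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}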
	[log condensed: the same GET poll continued at 18:23:11.741 and 18:23:12.241 with empty responses; node_ready.go:55 again warned at 18:23:11.742 that the "Ready" check would be retried after connection refused]
	I1009 18:23:12.560933   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:23:12.614798   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614843   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614940   34792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
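	Both addon failures share one root cause: nothing is listening on port 8441, so kubectl cannot even fetch the OpenAPI schema it needs for client-side validation before a single manifest byte is sent. A quick reachability probe of the apiserver health endpoint separates "socket down" from "server unhealthy"; a sketch, assuming the control-plane endpoint from this run (note that /readyz may answer 401/403 without credentials, but any HTTP status at all proves the listener is back):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The test cluster's cert is signed by minikube's own CA; skip
			// verification here because we only care about TCP/TLS liveness.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err) // matches the "connection refused" above
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /readyz:", resp.Status)
	}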
	[log condensed: the GET poll against https://192.168.49.2:8441/api/v1/nodes/functional-753440 continued every ~500ms from 18:23:12.741 through 18:23:54.742, always with an empty response; node_ready.go:55 emitted the same "will retry ... dial tcp 192.168.49.2:8441: connect: connection refused" warning roughly every 2.5s throughout]
	I1009 18:23:54.915717   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:54.969064   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969123   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969226   34792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:23:54.971206   34792 out.go:179] * Enabled addons: 
	I1009 18:23:54.972204   34792 addons.go:514] duration metric: took 1m55.845883827s for enable addons: enabled=[]
	I1009 18:23:55.241550   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.241625   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:55.741824   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.741904   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.742290   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:56.241973   34792 type.go:168] "Request Body" body=""
	I1009 18:23:56.242123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:56.242483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:56.741036   34792 type.go:168] "Request Body" body=""
	I1009 18:23:56.741152   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:56.741467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:57.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:23:57.241200   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:57.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:57.241611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:57.741252   34792 type.go:168] "Request Body" body=""
	I1009 18:23:57.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:57.741629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:58.241447   34792 type.go:168] "Request Body" body=""
	I1009 18:23:58.241725   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:58.242009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:58.741244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:58.741314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:58.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:59.241582   34792 type.go:168] "Request Body" body=""
	I1009 18:23:59.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:59.241976   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:59.242029   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:59.741645   34792 type.go:168] "Request Body" body=""
	I1009 18:23:59.741711   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:59.742016   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:00.241679   34792 type.go:168] "Request Body" body=""
	I1009 18:24:00.241745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:00.242104   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:00.741941   34792 type.go:168] "Request Body" body=""
	I1009 18:24:00.742015   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:00.742375   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:01.240979   34792 type.go:168] "Request Body" body=""
	I1009 18:24:01.241079   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:01.241446   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:01.741104   34792 type.go:168] "Request Body" body=""
	I1009 18:24:01.741198   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:01.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:01.741587   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 poll above repeats every ~500 ms from 18:24:02 through 18:25:01 — roughly 120 further request/response cycles identical to the ones shown, each returning an empty response because every dial to 192.168.49.2:8441 is refused; the node_ready.go "will retry" warning recurs about every two seconds throughout ...]
	W1009 18:25:01.242284   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:01.741879   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.741957   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.742330   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.241978   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.242041   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.242423   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.741029   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.741115   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.741462   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.241086   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.241501   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.741018   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.741114   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:03.741528   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:04.241053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.241116   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.241452   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:04.741007   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.741083   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.741445   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.241037   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.241427   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.741247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.741697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:05.741771   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:06.241254   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.241327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.241639   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:06.741286   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.741366   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.741735   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.741217   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:08.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.241647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:08.241711   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:08.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.741304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.741686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.241716   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.242124   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.741814   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.741880   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.742241   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:10.241918   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.241983   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.242339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:10.242405   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:10.741070   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.741194   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.741236   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.741322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.741656   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.241283   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.241345   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.241648   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.741341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:12.741727   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:13.241274   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.241660   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:13.741258   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.741346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.741679   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.241260   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.741277   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.741354   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:15.241247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.241309   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.241612   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:15.241669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:15.741488   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.741890   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.241468   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.241537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.741415   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.741480   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.741850   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:17.241442   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.241504   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:17.241861   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:17.741344   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.741411   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.741764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.241362   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.241432   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.241786   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.741325   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.741390   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.741723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:19.241633   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.241702   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.242011   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:19.242081   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:19.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.741733   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.742064   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.241763   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.241826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.242186   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.742053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.742131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.742513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.241071   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.241504   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.741088   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.741207   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:21.741594   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:22.241126   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:22.741131   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.741233   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.741588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.241242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.241568   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.741162   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.741242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.741577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:23.741627   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:24.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.241578   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:24.741188   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.741619   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.241275   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.741538   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.741597   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.741905   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:25.741979   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:26.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.241835   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:26.741401   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.741467   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.741780   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.241351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.241416   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.741308   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.741383   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:28.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.241331   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.241634   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:28.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:28.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.741315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.741626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.241574   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.241643   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.241986   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.741657   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.741719   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.742063   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:30.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.241804   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.242168   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:30.242230   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:30.741968   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.742470   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.241076   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.241532   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.741177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.741282   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.741624   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.241670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.741275   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.741360   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.741742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:32.741796   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:33.241329   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.241396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.241697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:33.741289   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.741384   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.741759   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.241368   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.241760   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.741351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.741428   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.741798   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:34.741864   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:35.241399   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.241838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:35.741772   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.741836   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.242003   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.242076   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.242435   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.741028   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.741097   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.741464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:37.241121   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.241212   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.241551   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:37.241620   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:37.741109   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.741219   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.741567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.241177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.741325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:39.241652   34792 type.go:168] "Request Body" body=""
	I1009 18:25:39.241726   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:39.242067   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:39.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:39.741736   34792 type.go:168] "Request Body" body=""
	I1009 18:25:39.741806   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:39.742215   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:40.241891   34792 type.go:168] "Request Body" body=""
	I1009 18:25:40.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:40.242334   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:40.741050   34792 type.go:168] "Request Body" body=""
	I1009 18:25:40.741121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:40.741479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:41.241091   34792 type.go:168] "Request Body" body=""
	I1009 18:25:41.241192   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:41.241525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:41.741118   34792 type.go:168] "Request Body" body=""
	I1009 18:25:41.741208   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:41.741569   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:41.741626   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:42.241220   34792 type.go:168] "Request Body" body=""
	I1009 18:25:42.241296   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:42.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:42.741251   34792 type.go:168] "Request Body" body=""
	I1009 18:25:42.741318   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:42.741643   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:43.241341   34792 type.go:168] "Request Body" body=""
	I1009 18:25:43.241412   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:43.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:43.741353   34792 type.go:168] "Request Body" body=""
	I1009 18:25:43.741418   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:43.741732   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:43.741785   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:44.241361   34792 type.go:168] "Request Body" body=""
	I1009 18:25:44.241434   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:44.241757   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:44.741332   34792 type.go:168] "Request Body" body=""
	I1009 18:25:44.741401   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:44.741760   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:45.241363   34792 type.go:168] "Request Body" body=""
	I1009 18:25:45.241438   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:45.241819   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:45.741752   34792 type.go:168] "Request Body" body=""
	I1009 18:25:45.741826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:45.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:45.742282   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:46.241931   34792 type.go:168] "Request Body" body=""
	I1009 18:25:46.242008   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:46.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:46.740984   34792 type.go:168] "Request Body" body=""
	I1009 18:25:46.741081   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:46.741473   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:47.241027   34792 type.go:168] "Request Body" body=""
	I1009 18:25:47.241148   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:47.241536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:47.741035   34792 type.go:168] "Request Body" body=""
	I1009 18:25:47.741101   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:47.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:48.241082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:48.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:48.241496   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:48.241548   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:48.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:48.741203   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:48.741562   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:49.241540   34792 type.go:168] "Request Body" body=""
	I1009 18:25:49.241609   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:49.241992   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:49.741668   34792 type.go:168] "Request Body" body=""
	I1009 18:25:49.741737   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:49.742062   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:50.241713   34792 type.go:168] "Request Body" body=""
	I1009 18:25:50.241779   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:50.242089   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:50.242165   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:50.741969   34792 type.go:168] "Request Body" body=""
	I1009 18:25:50.742080   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:50.742425   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical retries condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 request (headers as above) is re-issued every ~500ms from 18:25:51 through 18:26:52, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 repeats the warning below roughly every two seconds ...]
	W1009 18:26:52.242249   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:52.741892   34792 type.go:168] "Request Body" body=""
	I1009 18:26:52.741970   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:52.742329   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:53.241997   34792 type.go:168] "Request Body" body=""
	I1009 18:26:53.242075   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:53.242417   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:53.741024   34792 type.go:168] "Request Body" body=""
	I1009 18:26:53.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:53.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:54.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:26:54.241125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:54.241492   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:54.741067   34792 type.go:168] "Request Body" body=""
	I1009 18:26:54.741161   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:54.741529   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:54.741583   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:55.241129   34792 type.go:168] "Request Body" body=""
	I1009 18:26:55.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:55.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:55.741431   34792 type.go:168] "Request Body" body=""
	I1009 18:26:55.741496   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:55.741812   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:56.241424   34792 type.go:168] "Request Body" body=""
	I1009 18:26:56.241490   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:56.241796   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:56.741393   34792 type.go:168] "Request Body" body=""
	I1009 18:26:56.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:56.741773   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:56.741826   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:57.241378   34792 type.go:168] "Request Body" body=""
	I1009 18:26:57.241453   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:57.241771   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:57.741379   34792 type.go:168] "Request Body" body=""
	I1009 18:26:57.741447   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:57.741762   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:58.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:26:58.241413   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:58.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:58.741322   34792 type.go:168] "Request Body" body=""
	I1009 18:26:58.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:58.741713   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:59.241600   34792 type.go:168] "Request Body" body=""
	I1009 18:26:59.241669   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:59.241990   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:59.242043   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:59.741668   34792 type.go:168] "Request Body" body=""
	I1009 18:26:59.741732   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:59.742052   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:00.241717   34792 type.go:168] "Request Body" body=""
	I1009 18:27:00.241783   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:00.242095   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:00.741931   34792 type.go:168] "Request Body" body=""
	I1009 18:27:00.742008   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:00.742337   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:01.242007   34792 type.go:168] "Request Body" body=""
	I1009 18:27:01.242099   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:01.242479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:01.242534   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:01.741056   34792 type.go:168] "Request Body" body=""
	I1009 18:27:01.741158   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:01.741495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:02.241218   34792 type.go:168] "Request Body" body=""
	I1009 18:27:02.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:02.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:02.741259   34792 type.go:168] "Request Body" body=""
	I1009 18:27:02.741340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:02.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:03.241295   34792 type.go:168] "Request Body" body=""
	I1009 18:27:03.241359   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:03.241698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:03.741242   34792 type.go:168] "Request Body" body=""
	I1009 18:27:03.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:03.741628   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:03.741679   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:04.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:04.241270   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:04.241627   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:04.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:04.741287   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:04.741583   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:05.241255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:05.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:05.241742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:05.741635   34792 type.go:168] "Request Body" body=""
	I1009 18:27:05.741703   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:05.742066   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:05.742130   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:06.241658   34792 type.go:168] "Request Body" body=""
	I1009 18:27:06.241731   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:06.242079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:06.741854   34792 type.go:168] "Request Body" body=""
	I1009 18:27:06.741922   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:06.742243   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:07.241927   34792 type.go:168] "Request Body" body=""
	I1009 18:27:07.241997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:07.242459   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:07.741045   34792 type.go:168] "Request Body" body=""
	I1009 18:27:07.741126   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:07.741466   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:08.241033   34792 type.go:168] "Request Body" body=""
	I1009 18:27:08.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:08.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:08.241511   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:08.741034   34792 type.go:168] "Request Body" body=""
	I1009 18:27:08.741096   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:08.741406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:09.241378   34792 type.go:168] "Request Body" body=""
	I1009 18:27:09.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:09.241764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:09.741349   34792 type.go:168] "Request Body" body=""
	I1009 18:27:09.741417   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:09.741711   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:10.241285   34792 type.go:168] "Request Body" body=""
	I1009 18:27:10.241365   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:10.241692   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:10.241753   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:10.741690   34792 type.go:168] "Request Body" body=""
	I1009 18:27:10.741757   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:10.742128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:11.241848   34792 type.go:168] "Request Body" body=""
	I1009 18:27:11.241913   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:11.242250   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:11.741958   34792 type.go:168] "Request Body" body=""
	I1009 18:27:11.742022   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:11.742364   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:12.240970   34792 type.go:168] "Request Body" body=""
	I1009 18:27:12.241079   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:12.241437   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:12.741083   34792 type.go:168] "Request Body" body=""
	I1009 18:27:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:12.741518   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:12.741570   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:13.241130   34792 type.go:168] "Request Body" body=""
	I1009 18:27:13.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:13.241579   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:13.741161   34792 type.go:168] "Request Body" body=""
	I1009 18:27:13.741231   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:13.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:14.241185   34792 type.go:168] "Request Body" body=""
	I1009 18:27:14.241247   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:14.241557   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:14.741128   34792 type.go:168] "Request Body" body=""
	I1009 18:27:14.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:14.741560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:14.741616   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:15.241160   34792 type.go:168] "Request Body" body=""
	I1009 18:27:15.241231   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:15.241537   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:15.741362   34792 type.go:168] "Request Body" body=""
	I1009 18:27:15.741426   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:15.741731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:16.241332   34792 type.go:168] "Request Body" body=""
	I1009 18:27:16.241395   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:16.241711   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:16.741290   34792 type.go:168] "Request Body" body=""
	I1009 18:27:16.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:16.741691   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:16.741746   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:17.241296   34792 type.go:168] "Request Body" body=""
	I1009 18:27:17.241365   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:17.241677   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:17.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:27:17.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:17.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:18.241233   34792 type.go:168] "Request Body" body=""
	I1009 18:27:18.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:18.241649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:18.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:27:18.741327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:18.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:19.241576   34792 type.go:168] "Request Body" body=""
	I1009 18:27:19.241642   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:19.241965   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:19.242017   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:19.741671   34792 type.go:168] "Request Body" body=""
	I1009 18:27:19.741744   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:19.742057   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:20.241721   34792 type.go:168] "Request Body" body=""
	I1009 18:27:20.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:20.242076   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:20.742009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:20.742090   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:20.742453   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:21.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:27:21.241122   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:21.241467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:21.741089   34792 type.go:168] "Request Body" body=""
	I1009 18:27:21.741181   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:21.741490   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:21.741542   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:22.241108   34792 type.go:168] "Request Body" body=""
	I1009 18:27:22.241209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:22.241541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:22.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:27:22.741302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:22.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:23.241319   34792 type.go:168] "Request Body" body=""
	I1009 18:27:23.241387   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:23.241701   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:23.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:27:23.741296   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:23.741605   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:23.741658   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:24.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:27:24.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:24.241598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:24.741228   34792 type.go:168] "Request Body" body=""
	I1009 18:27:24.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:24.741613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:25.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:27:25.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:25.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:25.741545   34792 type.go:168] "Request Body" body=""
	I1009 18:27:25.741614   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:25.741927   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:25.742024   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:26.241505   34792 type.go:168] "Request Body" body=""
	I1009 18:27:26.241567   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:26.241878   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:26.741454   34792 type.go:168] "Request Body" body=""
	I1009 18:27:26.741518   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:26.741875   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:27.241441   34792 type.go:168] "Request Body" body=""
	I1009 18:27:27.241506   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:27.241818   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:27.741400   34792 type.go:168] "Request Body" body=""
	I1009 18:27:27.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:27.741797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:28.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:27:28.241474   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:28.241808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:28.241862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:28.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:28.741472   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:28.741806   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:29.241748   34792 type.go:168] "Request Body" body=""
	I1009 18:27:29.241819   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:29.242161   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:29.741821   34792 type.go:168] "Request Body" body=""
	I1009 18:27:29.741885   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:29.742231   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:30.241904   34792 type.go:168] "Request Body" body=""
	I1009 18:27:30.241974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:30.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:30.242382   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:30.741035   34792 type.go:168] "Request Body" body=""
	I1009 18:27:30.741108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:30.741409   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:31.241068   34792 type.go:168] "Request Body" body=""
	I1009 18:27:31.241132   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:31.241479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:31.741086   34792 type.go:168] "Request Body" body=""
	I1009 18:27:31.741176   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:31.741471   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:32.241219   34792 type.go:168] "Request Body" body=""
	I1009 18:27:32.241295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:32.241610   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:32.741219   34792 type.go:168] "Request Body" body=""
	I1009 18:27:32.741298   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:32.741606   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:32.741661   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:33.241210   34792 type.go:168] "Request Body" body=""
	I1009 18:27:33.241276   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:33.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:33.741182   34792 type.go:168] "Request Body" body=""
	I1009 18:27:33.741248   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:33.741547   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:34.241192   34792 type.go:168] "Request Body" body=""
	I1009 18:27:34.241262   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:34.241590   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:34.741212   34792 type.go:168] "Request Body" body=""
	I1009 18:27:34.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:34.741609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:35.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:27:35.241323   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:35.241649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:35.241703   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:35.741567   34792 type.go:168] "Request Body" body=""
	I1009 18:27:35.741632   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:35.741973   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:36.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.241728   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.242025   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:36.741778   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.741844   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.742212   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:37.241852   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.241925   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.242276   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:37.242330   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:37.741978   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.742052   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.742377   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.240952   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.241027   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.241428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.741115   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.741222   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.741569   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.241531   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.241853   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.741475   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.741888   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:39.741940   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:40.241482   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.241546   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.241865   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:40.741822   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.741912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.742310   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.241924   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.241992   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.242352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.742037   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.742123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.742467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:41.742533   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:42.241062   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.241131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.241483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:42.741199   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.741261   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.741576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.241209   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.241285   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.241620   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.741257   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.741675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:44.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.241630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:44.241684   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:44.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.741621   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.241009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.241089   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.241464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.741255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:46.241261   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.241687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:46.241736   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:46.741271   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.741338   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.241666   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.741310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.741653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.241251   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.241651   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.741328   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.741647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:48.741699   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:49.241692   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.241772   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.242116   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:49.741779   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.741846   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.242357   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.741207   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:51.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.241313   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:51.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:51.741256   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.741385   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.741740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.241321   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.241392   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.241724   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.741315   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.741382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.741729   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:53.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.241398   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:53.241797   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:53.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.741465   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.741821   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.241418   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.241482   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.241803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.741399   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.741794   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:55.241395   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:55.241851   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:55.741689   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.741763   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.742091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.241733   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.241801   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.242128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.741823   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.741896   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.742277   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:57.241950   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.242025   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:57.242451   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:57.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.741454   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.241127   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.241225   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.741208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.741281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:59.241113   34792 node_ready.go:38] duration metric: took 6m0.000256287s for node "functional-753440" to be "Ready" ...
	I1009 18:27:59.244464   34792 out.go:203] 
	W1009 18:27:59.246567   34792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 18:27:59.246590   34792 out.go:285] * 
	W1009 18:27:59.248293   34792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:27:59.250105   34792 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:27:53 functional-753440 crio[2938]: time="2025-10-09T18:27:53.566345887Z" level=info msg="createCtr: removing container 0c18fe4878c761a30a5eee30b1a575cf451c5fd072fd1925eb3fd4c8f81a8c06" id=51dd4f2d-4e2d-4f9e-9f26-7bd205ff224f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:53 functional-753440 crio[2938]: time="2025-10-09T18:27:53.566386487Z" level=info msg="createCtr: deleting container 0c18fe4878c761a30a5eee30b1a575cf451c5fd072fd1925eb3fd4c8f81a8c06 from storage" id=51dd4f2d-4e2d-4f9e-9f26-7bd205ff224f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:53 functional-753440 crio[2938]: time="2025-10-09T18:27:53.568500213Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753440_kube-system_d8200e5d2f7672a0974c7d953c472e15_0" id=51dd4f2d-4e2d-4f9e-9f26-7bd205ff224f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.542707264Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b5521a23-7c25-42ae-abf0-dde4a140797e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.543570715Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3aa3e8bd-dc22-490b-8c40-4a2ae736a440 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.544555038Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.544769504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.548056049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.548479934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.567649428Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.569164247Z" level=info msg="createCtr: deleting container ID 44cc920dbd6720b1f12608fd0a870e869fd6904251296b8ad12e2b688c1490f2 from idIndex" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.569212562Z" level=info msg="createCtr: removing container 44cc920dbd6720b1f12608fd0a870e869fd6904251296b8ad12e2b688c1490f2" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.569243649Z" level=info msg="createCtr: deleting container 44cc920dbd6720b1f12608fd0a870e869fd6904251296b8ad12e2b688c1490f2 from storage" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.571368081Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.542891227Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1f1258bf-421d-4688-b323-1fa5c359ad07 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.543963575Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d89f14c8-0567-4ffc-93dc-1010587b7efb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.545202698Z" level=info msg="Creating container: kube-system/etcd-functional-753440/etcd" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.545739676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.551198324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.5516209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.573070804Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.574711439Z" level=info msg="createCtr: deleting container ID 31d7052f51448ab4cb31450be8c20e284409f85b31edc43d374b6e4c387c6694 from idIndex" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.574748009Z" level=info msg="createCtr: removing container 31d7052f51448ab4cb31450be8c20e284409f85b31edc43d374b6e4c387c6694" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.574780064Z" level=info msg="createCtr: deleting container 31d7052f51448ab4cb31450be8c20e284409f85b31edc43d374b6e4c387c6694 from storage" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.576778871Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:28:01.074987    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:01.075696    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:01.077479    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:01.077983    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:01.079780    4316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:28:01 up  1:10,  0 user,  load average: 0.00, 0.07, 0.09
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:27:53 functional-753440 kubelet[1796]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(d8200e5d2f7672a0974c7d953c472e15): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:27:53 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:27:53 functional-753440 kubelet[1796]: E1009 18:27:53.568942    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="d8200e5d2f7672a0974c7d953c472e15"
	Oct 09 18:27:53 functional-753440 kubelet[1796]: E1009 18:27:53.583083    1796 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:27:54 functional-753440 kubelet[1796]: E1009 18:27:54.225983    1796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:27:54 functional-753440 kubelet[1796]: I1009 18:27:54.424037    1796 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:27:54 functional-753440 kubelet[1796]: E1009 18:27:54.424450    1796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:27:55 functional-753440 kubelet[1796]: E1009 18:27:55.901999    1796 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:27:57 functional-753440 kubelet[1796]: E1009 18:27:57.053904    1796 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753440.186ce57ba0b4bd78\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce57ba0b4bd78  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:17:53.534958968 +0000 UTC m=+0.381579824,LastTimestamp:2025-10-09 18:17:53.536403063 +0000 UTC m=+0.383023919,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:27:58 functional-753440 kubelet[1796]: E1009 18:27:58.542272    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:27:58 functional-753440 kubelet[1796]: E1009 18:27:58.571686    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:27:58 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:27:58 functional-753440 kubelet[1796]:  > podSandboxID="a0f669ac9226ee4ac7b841aacfe05ece4235d10b02fe7bb351eab32cadb9e24d"
	Oct 09 18:27:58 functional-753440 kubelet[1796]: E1009 18:27:58.571796    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:27:58 functional-753440 kubelet[1796]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:27:58 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:27:58 functional-753440 kubelet[1796]: E1009 18:27:58.571834    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.542411    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.577097    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:00 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:00 functional-753440 kubelet[1796]:  > podSandboxID="b2bb9a720dde4343bb6d68e21981701423cf9ba8fc536a4b16c3a5d7282c9e5b"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.577210    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:00 functional-753440 kubelet[1796]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:00 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.577254    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (314.595177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.58s)
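The failing wait above is a fixed-cadence readiness poll: a GET against https://192.168.49.2:8441/api/v1/nodes/functional-753440 every 500ms until a 6m0s budget is exhausted, with every attempt refused because the apiserver container never comes up. The following is a minimal Go sketch of that kind of deadline-bounded wait, not minikube's actual node_ready.go; only the address, cadence, and timeout are taken from the log, everything else is illustrative.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// waitForAPIServer retries a TCP dial every 500ms (the cadence seen in the
// log) until the port accepts a connection or the context deadline expires.
func waitForAPIServer(ctx context.Context, addr string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("waiting for %s: %w", addr, ctx.Err())
		case <-ticker.C:
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err != nil {
				continue // e.g. "connect: connection refused" while the apiserver is down
			}
			conn.Close()
			return nil
		}
	}
}

func main() {
	// 6 minutes matches the "wait 6m0s for node" budget in the failure message.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx, "192.168.49.2:8441"); err != nil {
		fmt.Println(err)
	}
}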

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (2.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-753440 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-753440 get po -A: exit status 1 (58.235479ms)

                                                
                                                
** stderr ** 
	E1009 18:28:02.040633   38398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:02.041292   38398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:02.042811   38398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:02.043178   38398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:02.044582   38398 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-753440 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1009 18:28:02.040633   38398 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 18:28:02.041292   38398 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 18:28:02.042811   38398 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 18:28:02.043178   38398 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1009 18:28:02.044582   38398 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-753440 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-753440 get po -A"
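The refused connections here are a downstream symptom; the CRI-O and kubelet logs in the previous section show the proximate cause: every control-plane container fails with "container create failed: cannot open sd-bus: No such file or directory", which typically indicates the runtime was asked to use the systemd cgroup driver while no systemd bus socket is reachable inside the node container. A hedged diagnostic sketch follows; the socket paths are standard systemd locations, and the check is illustrative only, not part of minikube or CRI-O.

package main

import (
	"fmt"
	"os"
)

// Checks for the sockets sd-bus normally connects to. If both are absent
// inside the node container, runtimes configured for the systemd cgroup
// driver fail exactly as in the CRI-O log above
// ("cannot open sd-bus: No such file or directory").
func main() {
	for _, p := range []string{
		"/run/systemd/private",        // systemd's private bus endpoint
		"/run/dbus/system_bus_socket", // the system D-Bus socket
	} {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: missing (%v)\n", p, err)
		} else {
			fmt.Printf("%s: present\n", p)
		}
	}
}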
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
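The inspect output confirms the container is running and that the apiserver's 8441/tcp is published to 127.0.0.1:32781. A small hedged Go sketch for extracting that mapping from `docker inspect` JSON; the struct mirrors only the fields used above, and the file name is hypothetical (pipe the inspect output in, e.g. `docker inspect functional-753440 | go run portmap.go`).

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspect models just the slice of `docker inspect` output we need:
// NetworkSettings.Ports maps "port/proto" to its host bindings.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var containers []inspect // docker inspect emits a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["8441/tcp"] {
			// For this report: 127.0.0.1:32781 fronts 192.168.49.2:8441.
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}
}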
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (293.636604ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-240600                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-240600   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ --download-only -p download-docker-360662 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-360662 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p download-docker-360662                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-360662 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ --download-only -p binary-mirror-609906 --alsologtostderr --binary-mirror http://127.0.0.1:44531 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-609906   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p binary-mirror-609906                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-609906   │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ addons  │ enable dashboard -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ start   │ -p addons-246638 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ -p addons-246638                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-246638          │ jenkins │ v1.37.0 │ 09 Oct 25 18:04 UTC │ 09 Oct 25 18:05 UTC │
	│ start   │ -p nospam-663194 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-663194 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:05 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-663194          │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-753440      │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ -p functional-753440 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-753440      │ jenkins │ v1.37.0 │ 09 Oct 25 18:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:21:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:21:55.407242   34792 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:21:55.407482   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407490   34792 out.go:374] Setting ErrFile to fd 2...
	I1009 18:21:55.407494   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407669   34792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:21:55.408109   34792 out.go:368] Setting JSON to false
	I1009 18:21:55.408948   34792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3863,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:21:55.409029   34792 start.go:141] virtualization: kvm guest
	I1009 18:21:55.411208   34792 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:21:55.412706   34792 notify.go:220] Checking for updates...
	I1009 18:21:55.412728   34792 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:21:55.414107   34792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:21:55.415609   34792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:55.417005   34792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:21:55.418411   34792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:21:55.419884   34792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:21:55.421538   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:55.421658   34792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:21:55.445068   34792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:21:55.445204   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.504624   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.494450296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.504746   34792 docker.go:318] overlay module found
	I1009 18:21:55.507261   34792 out.go:179] * Using the docker driver based on existing profile
	I1009 18:21:55.508504   34792 start.go:305] selected driver: docker
	I1009 18:21:55.508518   34792 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.508594   34792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:21:55.508665   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.566793   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.557358643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.567631   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:55.567714   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:55.567780   34792 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.569913   34792 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:21:55.571250   34792 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:21:55.572672   34792 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:21:55.573890   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:55.573921   34792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:21:55.573933   34792 cache.go:64] Caching tarball of preloaded images
	I1009 18:21:55.573992   34792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:21:55.574016   34792 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:21:55.574025   34792 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:21:55.574109   34792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:21:55.593603   34792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:21:55.593631   34792 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:21:55.593646   34792 cache.go:242] Successfully downloaded all kic artifacts
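	The "exists in daemon, skipping load" decision above reduces to an image lookup in the local Docker daemon. A sketch of an equivalent check by digest (the digest is the one from the log; the echoed labels are illustrative, and inspect-by-digest assumes the image was pulled under that reference):
	
	  # Exit status of docker image inspect tells whether the kicbase image is cached.
	  docker image inspect gcr.io/k8s-minikube/kicbase-builds@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 \
	    >/dev/null 2>&1 && echo "present, skipping pull" || echo "missing, pull required"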
	I1009 18:21:55.593672   34792 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:21:55.593732   34792 start.go:364] duration metric: took 38.489µs to acquireMachinesLock for "functional-753440"
	I1009 18:21:55.593749   34792 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:21:55.593758   34792 fix.go:54] fixHost starting: 
	I1009 18:21:55.593970   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:55.610925   34792 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:21:55.610951   34792 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:21:55.612681   34792 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:21:55.612704   34792 machine.go:93] provisionDockerMachine start ...
	I1009 18:21:55.612764   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.630174   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.630389   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.630401   34792 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:21:55.773949   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
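	The inspect template above is how the host port mapped to the container's 22/tcp gets resolved before dialing SSH. A minimal standalone equivalent, assuming a running container named functional-753440 with SSH published:
	
	  # Host port that Docker mapped to the container's SSH port (22/tcp).
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-753440
	  # Same answer via the docker port shortcut:
	  docker port functional-753440 22/tcp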
	
	I1009 18:21:55.773975   34792 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:21:55.774031   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.792726   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.792949   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.792962   34792 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:21:55.945969   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.946040   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.963600   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.963821   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.963839   34792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:21:56.108677   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
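	The script above only touches /etc/hosts when the 127.0.1.1 entry is missing or stale, which is why the run here produced no output. A quick in-guest verification (an illustrative check, not part of the minikube flow):
	
	  # The hostname should now resolve locally without DNS.
	  grep functional-753440 /etc/hosts
	  getent hosts functional-753440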
	I1009 18:21:56.108700   34792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:21:56.108717   34792 ubuntu.go:190] setting up certificates
	I1009 18:21:56.108727   34792 provision.go:84] configureAuth start
	I1009 18:21:56.108783   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:56.127107   34792 provision.go:143] copyHostCerts
	I1009 18:21:56.127166   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127197   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:21:56.127212   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127290   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:21:56.127394   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127416   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:21:56.127420   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127449   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:21:56.127507   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127523   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:21:56.127526   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127549   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:21:56.127598   34792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:21:56.380428   34792 provision.go:177] copyRemoteCerts
	I1009 18:21:56.380482   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:21:56.380515   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.398054   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.500395   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:21:56.500448   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:21:56.517603   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:21:56.517655   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:21:56.534349   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:21:56.534397   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:21:56.551305   34792 provision.go:87] duration metric: took 442.551304ms to configureAuth
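	configureAuth re-signs the machine's server certificate with the SANs listed in the log above (127.0.0.1, 192.168.49.2, functional-753440, localhost, minikube). A sketch for confirming those SANs on the generated file, assuming openssl is available on the host:
	
	  # Print the Subject Alternative Names embedded in the server cert.
	  openssl x509 -noout -text -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'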
	I1009 18:21:56.551330   34792 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:21:56.551498   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:56.551579   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.568651   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:56.568866   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:56.568881   34792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:21:56.838390   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:21:56.838414   34792 machine.go:96] duration metric: took 1.225703269s to provisionDockerMachine
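	The sysconfig drop-in written above only takes effect because the same command also restarts crio. Confirming it landed could look like this (illustrative, not captured from this run):
	
	  # Show the minikube drop-in and the restarted unit.
	  cat /etc/sysconfig/crio.minikube
	  sudo systemctl status crio --no-pager --lines=0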
	I1009 18:21:56.838426   34792 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:21:56.838437   34792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:21:56.838510   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:21:56.838559   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.856450   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.959658   34792 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:21:56.963119   34792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 18:21:56.963150   34792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 18:21:56.963158   34792 command_runner.go:130] > VERSION_ID="12"
	I1009 18:21:56.963165   34792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 18:21:56.963174   34792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 18:21:56.963179   34792 command_runner.go:130] > ID=debian
	I1009 18:21:56.963186   34792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 18:21:56.963194   34792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 18:21:56.963212   34792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 18:21:56.963315   34792 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:21:56.963334   34792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:21:56.963342   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:21:56.963382   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:21:56.963448   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:21:56.963463   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:21:56.963529   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:21:56.963535   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> /etc/test/nested/copy/14880/hosts
	I1009 18:21:56.963565   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:21:56.970888   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:56.988730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:21:57.005907   34792 start.go:296] duration metric: took 167.469505ms for postStartSetup
	I1009 18:21:57.005971   34792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:21:57.006025   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.023806   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.123166   34792 command_runner.go:130] > 39%
	I1009 18:21:57.123235   34792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:21:57.127917   34792 command_runner.go:130] > 179G
	I1009 18:21:57.127948   34792 fix.go:56] duration metric: took 1.534189396s for fixHost
	I1009 18:21:57.127960   34792 start.go:83] releasing machines lock for "functional-753440", held for 1.534218366s
	I1009 18:21:57.128034   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:57.145978   34792 ssh_runner.go:195] Run: cat /version.json
	I1009 18:21:57.146019   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.146063   34792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:21:57.146159   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.164302   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.164547   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.263542   34792 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 18:21:57.263690   34792 ssh_runner.go:195] Run: systemctl --version
	I1009 18:21:57.316955   34792 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 18:21:57.317002   34792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 18:21:57.317022   34792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 18:21:57.317074   34792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:21:57.353021   34792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:21:57.357737   34792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 18:21:57.357788   34792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:21:57.357834   34792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:21:57.365811   34792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
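	The find/-exec command above sidelines any pre-existing bridge or podman CNI configs so the recommended kindnet CNI can own pod networking; on this machine nothing matched. A more readable sketch of the same renaming, assuming configs live in /etc/cni/net.d:
	
	  # Append .mk_disabled to conflicting CNI configs (no-op when none exist).
	  for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	    [ -f "$f" ] && [ "${f%.mk_disabled}" = "$f" ] && sudo mv "$f" "$f.mk_disabled"
	  done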
	I1009 18:21:57.365833   34792 start.go:495] detecting cgroup driver to use...
	I1009 18:21:57.365861   34792 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:21:57.365903   34792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:21:57.380237   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:21:57.392796   34792 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:21:57.392859   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:21:57.407315   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:21:57.419892   34792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:21:57.506572   34792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:21:57.589596   34792 docker.go:234] disabling docker service ...
	I1009 18:21:57.589673   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:21:57.603725   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:21:57.615780   34792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:21:57.696218   34792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:21:57.781915   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:21:57.794534   34792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:21:57.808497   34792 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
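	With /etc/crictl.yaml now pointing at the CRI-O socket, crictl commands no longer need an explicit --runtime-endpoint flag:
	
	  # Both talk to unix:///var/run/crio/crio.sock, picked up from /etc/crictl.yaml.
	  sudo crictl info
	  sudo crictl ps -a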
	I1009 18:21:57.808534   34792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:21:57.808589   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.817764   34792 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:21:57.817814   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.827115   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.836066   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.844563   34792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:21:57.852458   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.861227   34792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.869900   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
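	After the sed edits above, the touched fragment of /etc/crio/crio.conf.d/02-crio.conf plausibly reads as follows (reconstructed from the commands, not captured from the machine):
	
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"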
	I1009 18:21:57.878917   34792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:21:57.886570   34792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 18:21:57.886644   34792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:21:57.894517   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:57.979064   34792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:21:58.090717   34792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:21:58.090783   34792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:21:58.095044   34792 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 18:21:58.095068   34792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 18:21:58.095074   34792 command_runner.go:130] > Device: 0,59	Inode: 3803        Links: 1
	I1009 18:21:58.095080   34792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.095085   34792 command_runner.go:130] > Access: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095093   34792 command_runner.go:130] > Modify: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095101   34792 command_runner.go:130] > Change: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095108   34792 command_runner.go:130] >  Birth: 2025-10-09 18:21:58.072690390 +0000
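	"Will wait 60s for socket path" amounts to polling until the unix socket exists; a minimal sketch of that wait (the 0.5s interval and iteration count are assumptions, not minikube's actual values):
	
	  # Poll up to ~60s for the CRI-O socket, then stat it as the log does.
	  for i in $(seq 1 120); do
	    [ -S /var/run/crio/crio.sock ] && break
	    sleep 0.5
	  done
	  stat /var/run/crio/crio.sock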
	I1009 18:21:58.095130   34792 start.go:563] Will wait 60s for crictl version
	I1009 18:21:58.095214   34792 ssh_runner.go:195] Run: which crictl
	I1009 18:21:58.099101   34792 command_runner.go:130] > /usr/local/bin/crictl
	I1009 18:21:58.099187   34792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:21:58.122816   34792 command_runner.go:130] > Version:  0.1.0
	I1009 18:21:58.122840   34792 command_runner.go:130] > RuntimeName:  cri-o
	I1009 18:21:58.122845   34792 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 18:21:58.122850   34792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 18:21:58.122867   34792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:21:58.122920   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.149899   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.149922   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.149928   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.149933   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.149944   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.149949   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.149952   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.149957   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.149961   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.149964   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.149967   34792 command_runner.go:130] >      static
	I1009 18:21:58.149971   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.149975   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.149978   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.149982   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.149988   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.149991   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.149998   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.150002   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.150007   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.151351   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.178662   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.178683   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.178689   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.178693   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.178698   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.178702   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.178706   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.178714   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.178718   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.178721   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.178724   34792 command_runner.go:130] >      static
	I1009 18:21:58.178728   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.178732   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.178735   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.178739   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.178742   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.178757   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.178764   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.178768   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.178771   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.181232   34792 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:21:58.182844   34792 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:21:58.200852   34792 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:21:58.205024   34792 command_runner.go:130] > 192.168.49.1	host.minikube.internal
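	The long --format template two steps up flattens docker network inspect into one JSON object; individual fields come out the same way with smaller templates, e.g.:
	
	  # Subnet and gateway of the cluster's Docker network.
	  docker network inspect functional-753440 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'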
	I1009 18:21:58.205096   34792 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:21:58.205232   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:58.205276   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.234303   34792 command_runner.go:130] > {
	I1009 18:21:58.234338   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.234345   34792 command_runner.go:130] >     {
	I1009 18:21:58.234355   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.234362   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234369   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.234373   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234378   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234388   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.234400   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.234409   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234417   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.234426   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234435   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234443   34792 command_runner.go:130] >     },
	I1009 18:21:58.234449   34792 command_runner.go:130] >     {
	I1009 18:21:58.234460   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.234468   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234478   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.234486   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234494   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234509   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.234523   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.234532   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234539   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.234548   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234565   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234581   34792 command_runner.go:130] >     },
	I1009 18:21:58.234590   34792 command_runner.go:130] >     {
	I1009 18:21:58.234600   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.234610   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234619   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.234627   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234635   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234649   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.234665   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.234673   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234680   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.234689   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.234697   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234713   34792 command_runner.go:130] >     },
	I1009 18:21:58.234721   34792 command_runner.go:130] >     {
	I1009 18:21:58.234731   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.234740   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234749   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.234757   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234765   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234780   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.234794   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.234802   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234809   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.234817   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234824   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234833   34792 command_runner.go:130] >       },
	I1009 18:21:58.234849   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234858   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234864   34792 command_runner.go:130] >     },
	I1009 18:21:58.234871   34792 command_runner.go:130] >     {
	I1009 18:21:58.234882   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.234891   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234906   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.234914   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234921   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234936   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.234952   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.234960   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234967   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.234976   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234984   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234991   34792 command_runner.go:130] >       },
	I1009 18:21:58.234999   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235008   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235015   34792 command_runner.go:130] >     },
	I1009 18:21:58.235023   34792 command_runner.go:130] >     {
	I1009 18:21:58.235033   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.235042   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235052   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.235059   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235065   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235078   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.235098   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.235106   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235113   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.235122   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235130   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235152   34792 command_runner.go:130] >       },
	I1009 18:21:58.235159   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235168   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235174   34792 command_runner.go:130] >     },
	I1009 18:21:58.235183   34792 command_runner.go:130] >     {
	I1009 18:21:58.235193   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.235202   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235211   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.235227   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235236   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235248   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.235262   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.235271   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235278   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.235286   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235294   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235302   34792 command_runner.go:130] >     },
	I1009 18:21:58.235314   34792 command_runner.go:130] >     {
	I1009 18:21:58.235326   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.235333   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235344   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.235352   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235359   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235373   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.235408   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.235416   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235424   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.235433   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235441   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235450   34792 command_runner.go:130] >       },
	I1009 18:21:58.235456   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235464   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235470   34792 command_runner.go:130] >     },
	I1009 18:21:58.235477   34792 command_runner.go:130] >     {
	I1009 18:21:58.235488   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.235496   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235508   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.235515   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235522   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235536   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.235550   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.235566   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235576   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.235582   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235592   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.235599   34792 command_runner.go:130] >       },
	I1009 18:21:58.235606   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235615   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.235621   34792 command_runner.go:130] >     }
	I1009 18:21:58.235627   34792 command_runner.go:130] >   ]
	I1009 18:21:58.235633   34792 command_runner.go:130] > }
	I1009 18:21:58.236008   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.236027   34792 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:21:58.236090   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.260405   34792 command_runner.go:130] > {
	I1009 18:21:58.260434   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.260440   34792 command_runner.go:130] >     {
	I1009 18:21:58.260454   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.260464   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260473   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.260483   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260490   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260505   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.260520   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.260529   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260540   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.260550   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260560   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260566   34792 command_runner.go:130] >     },
	I1009 18:21:58.260575   34792 command_runner.go:130] >     {
	I1009 18:21:58.260586   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.260593   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260606   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.260615   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260624   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260639   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.260653   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.260661   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260667   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.260674   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260681   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260689   34792 command_runner.go:130] >     },
	I1009 18:21:58.260698   34792 command_runner.go:130] >     {
	I1009 18:21:58.260711   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.260721   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260732   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.260740   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260746   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260759   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.260769   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.260777   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260785   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.260794   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.260804   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260812   34792 command_runner.go:130] >     },
	I1009 18:21:58.260817   34792 command_runner.go:130] >     {
	I1009 18:21:58.260829   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.260838   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260848   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.260854   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260861   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260876   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.260890   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.260897   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260904   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.260914   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.260923   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.260931   34792 command_runner.go:130] >       },
	I1009 18:21:58.260939   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260949   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260957   34792 command_runner.go:130] >     },
	I1009 18:21:58.260965   34792 command_runner.go:130] >     {
	I1009 18:21:58.260974   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.260984   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260992   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.261000   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261007   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261018   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.261032   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.261040   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261047   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.261056   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261066   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261073   34792 command_runner.go:130] >       },
	I1009 18:21:58.261083   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261093   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261101   34792 command_runner.go:130] >     },
	I1009 18:21:58.261107   34792 command_runner.go:130] >     {
	I1009 18:21:58.261119   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.261128   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261153   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.261159   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261169   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261181   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.261196   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.261205   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261214   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.261223   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261234   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261243   34792 command_runner.go:130] >       },
	I1009 18:21:58.261249   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261258   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261266   34792 command_runner.go:130] >     },
	I1009 18:21:58.261270   34792 command_runner.go:130] >     {
	I1009 18:21:58.261283   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.261295   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261306   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.261314   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261321   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261334   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.261349   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.261356   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261364   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.261372   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261379   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261384   34792 command_runner.go:130] >     },
	I1009 18:21:58.261393   34792 command_runner.go:130] >     {
	I1009 18:21:58.261402   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.261409   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261417   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.261422   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261428   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261439   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.261460   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.261467   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261473   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.261482   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261491   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261498   34792 command_runner.go:130] >       },
	I1009 18:21:58.261507   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261516   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261525   34792 command_runner.go:130] >     },
	I1009 18:21:58.261533   34792 command_runner.go:130] >     {
	I1009 18:21:58.261543   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.261549   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261555   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.261563   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261570   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261584   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.261597   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.261607   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261614   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.261620   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261626   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.261632   34792 command_runner.go:130] >       },
	I1009 18:21:58.261636   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261641   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.261649   34792 command_runner.go:130] >     }
	I1009 18:21:58.261655   34792 command_runner.go:130] >   ]
	I1009 18:21:58.261663   34792 command_runner.go:130] > }
	I1009 18:21:58.262011   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.262027   34792 cache_images.go:85] Images are preloaded, skipping loading
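(Editor's note: the preload decision logged at cache_images.go:85 above is driven by the `crictl images --output json` payload dumped twice in this log. As a minimal illustrative sketch only, not minikube's actual implementation, the payload can be decoded with a couple of small Go structs; the field names below — id, repoTags, repoDigests, size, pinned — match the JSON shown above, and the tag checked in main is one that appears in this run's output.)

    // Sketch: decode `crictl images --output json` and check for an expected tag.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // crictlImage mirrors the per-image object seen in the log above.
    type crictlImage struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"` // crictl emits size as a quoted string
        Pinned      bool     `json:"pinned"`
    }

    // crictlImages mirrors the top-level {"images": [...]} wrapper.
    type crictlImages struct {
        Images []crictlImage `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list crictlImages
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        // Index every repo tag so presence checks are O(1).
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }
        // Tag taken from the log output above.
        fmt.Println("kube-apiserver present:", have["registry.k8s.io/kube-apiserver:v1.34.1"])
    }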
	I1009 18:21:58.262034   34792 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:21:58.262124   34792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
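(Editor's note: the kubelet [Unit]/[Service] drop-in logged at kubeadm.go:946 above is generated from the node config printed after it. The following is a hypothetical sketch, not minikube's templating code, showing how such a drop-in could be rendered with text/template; the version, hostname, and node IP values are the ones appearing in this log.)

    // Sketch: render a kubelet systemd drop-in like the one logged above.
    package main

    import (
        "os"
        "text/template"
    )

    // nodeConfig holds the fields substituted into the drop-in.
    type nodeConfig struct {
        KubernetesVersion string
        NodeName          string
        NodeIP            string
    }

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        // Values taken from the config struct logged above.
        cfg := nodeConfig{
            KubernetesVersion: "v1.34.1",
            NodeName:          "functional-753440",
            NodeIP:            "192.168.49.2",
        }
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }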
	I1009 18:21:58.262213   34792 ssh_runner.go:195] Run: crio config
	I1009 18:21:58.302300   34792 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 18:21:58.302331   34792 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 18:21:58.302340   34792 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 18:21:58.302345   34792 command_runner.go:130] > #
	I1009 18:21:58.302356   34792 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 18:21:58.302365   34792 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 18:21:58.302374   34792 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 18:21:58.302388   34792 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 18:21:58.302395   34792 command_runner.go:130] > # reload'.
	I1009 18:21:58.302413   34792 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 18:21:58.302424   34792 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 18:21:58.302434   34792 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 18:21:58.302446   34792 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 18:21:58.302451   34792 command_runner.go:130] > [crio]
	I1009 18:21:58.302460   34792 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 18:21:58.302491   34792 command_runner.go:130] > # containers images, in this directory.
	I1009 18:21:58.302515   34792 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 18:21:58.302526   34792 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 18:21:58.302534   34792 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 18:21:58.302549   34792 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 18:21:58.302558   34792 command_runner.go:130] > # imagestore = ""
	I1009 18:21:58.302569   34792 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 18:21:58.302588   34792 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 18:21:58.302596   34792 command_runner.go:130] > # storage_driver = "overlay"
	I1009 18:21:58.302604   34792 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 18:21:58.302618   34792 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 18:21:58.302625   34792 command_runner.go:130] > # storage_option = [
	I1009 18:21:58.302630   34792 command_runner.go:130] > # ]
	I1009 18:21:58.302640   34792 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 18:21:58.302649   34792 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 18:21:58.302660   34792 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 18:21:58.302668   34792 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 18:21:58.302681   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 18:21:58.302689   34792 command_runner.go:130] > # always happen on a node reboot
	I1009 18:21:58.302700   34792 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 18:21:58.302714   34792 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 18:21:58.302727   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 18:21:58.302738   34792 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 18:21:58.302745   34792 command_runner.go:130] > # version_file_persist = ""
	I1009 18:21:58.302760   34792 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 18:21:58.302779   34792 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 18:21:58.302786   34792 command_runner.go:130] > # internal_wipe = true
	I1009 18:21:58.302800   34792 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 18:21:58.302809   34792 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 18:21:58.302823   34792 command_runner.go:130] > # internal_repair = true
	I1009 18:21:58.302832   34792 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 18:21:58.302841   34792 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 18:21:58.302850   34792 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 18:21:58.302858   34792 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 18:21:58.302871   34792 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 18:21:58.302877   34792 command_runner.go:130] > [crio.api]
	I1009 18:21:58.302889   34792 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 18:21:58.302895   34792 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 18:21:58.302903   34792 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 18:21:58.302908   34792 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 18:21:58.302918   34792 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 18:21:58.302922   34792 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 18:21:58.302928   34792 command_runner.go:130] > # stream_port = "0"
	I1009 18:21:58.302935   34792 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 18:21:58.302943   34792 command_runner.go:130] > # stream_enable_tls = false
	I1009 18:21:58.302953   34792 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 18:21:58.302963   34792 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 18:21:58.302972   34792 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 18:21:58.302984   34792 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303003   34792 command_runner.go:130] > # stream_tls_cert = ""
	I1009 18:21:58.303014   34792 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 18:21:58.303019   34792 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303024   34792 command_runner.go:130] > # stream_tls_key = ""
	I1009 18:21:58.303031   34792 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 18:21:58.303041   34792 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 18:21:58.303054   34792 command_runner.go:130] > # automatically pick up the changes.
	I1009 18:21:58.303061   34792 command_runner.go:130] > # stream_tls_ca = ""
	I1009 18:21:58.303083   34792 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303094   34792 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 18:21:58.303103   34792 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303111   34792 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 18:21:58.303120   34792 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 18:21:58.303130   34792 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 18:21:58.303156   34792 command_runner.go:130] > [crio.runtime]
	I1009 18:21:58.303167   34792 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 18:21:58.303176   34792 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 18:21:58.303182   34792 command_runner.go:130] > # "nofile=1024:2048"
	I1009 18:21:58.303192   34792 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 18:21:58.303201   34792 command_runner.go:130] > # default_ulimits = [
	I1009 18:21:58.303207   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303219   34792 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 18:21:58.303225   34792 command_runner.go:130] > # no_pivot = false
	I1009 18:21:58.303234   34792 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 18:21:58.303261   34792 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 18:21:58.303272   34792 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 18:21:58.303282   34792 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 18:21:58.303294   34792 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 18:21:58.303307   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303315   34792 command_runner.go:130] > # conmon = ""
	I1009 18:21:58.303321   34792 command_runner.go:130] > # Cgroup setting for conmon
	I1009 18:21:58.303330   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 18:21:58.303336   34792 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 18:21:58.303344   34792 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 18:21:58.303351   34792 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 18:21:58.303361   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303366   34792 command_runner.go:130] > # conmon_env = [
	I1009 18:21:58.303370   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303377   34792 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 18:21:58.303389   34792 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 18:21:58.303398   34792 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 18:21:58.303404   34792 command_runner.go:130] > # default_env = [
	I1009 18:21:58.303408   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303417   34792 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 18:21:58.303434   34792 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 18:21:58.303443   34792 command_runner.go:130] > # selinux = false
	I1009 18:21:58.303454   34792 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 18:21:58.303468   34792 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 18:21:58.303479   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303489   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.303500   34792 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 18:21:58.303513   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303520   34792 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 18:21:58.303530   34792 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 18:21:58.303543   34792 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 18:21:58.303553   34792 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 18:21:58.303567   34792 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 18:21:58.303578   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303586   34792 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 18:21:58.303597   34792 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 18:21:58.303603   34792 command_runner.go:130] > # the cgroup blockio controller.
	I1009 18:21:58.303610   34792 command_runner.go:130] > # blockio_config_file = ""
	I1009 18:21:58.303625   34792 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 18:21:58.303631   34792 command_runner.go:130] > # blockio parameters.
	I1009 18:21:58.303639   34792 command_runner.go:130] > # blockio_reload = false
	I1009 18:21:58.303649   34792 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 18:21:58.303659   34792 command_runner.go:130] > # irqbalance daemon.
	I1009 18:21:58.303667   34792 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 18:21:58.303718   34792 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 18:21:58.303738   34792 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 18:21:58.303748   34792 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 18:21:58.303756   34792 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 18:21:58.303765   34792 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 18:21:58.303772   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303777   34792 command_runner.go:130] > # rdt_config_file = ""
	I1009 18:21:58.303787   34792 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 18:21:58.303793   34792 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 18:21:58.303802   34792 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 18:21:58.303809   34792 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 18:21:58.303817   34792 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 18:21:58.303827   34792 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 18:21:58.303836   34792 command_runner.go:130] > # will be added.
	I1009 18:21:58.303844   34792 command_runner.go:130] > # default_capabilities = [
	I1009 18:21:58.303853   34792 command_runner.go:130] > # 	"CHOWN",
	I1009 18:21:58.303860   34792 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 18:21:58.303868   34792 command_runner.go:130] > # 	"FSETID",
	I1009 18:21:58.303874   34792 command_runner.go:130] > # 	"FOWNER",
	I1009 18:21:58.303883   34792 command_runner.go:130] > # 	"SETGID",
	I1009 18:21:58.303899   34792 command_runner.go:130] > # 	"SETUID",
	I1009 18:21:58.303908   34792 command_runner.go:130] > # 	"SETPCAP",
	I1009 18:21:58.303916   34792 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 18:21:58.303925   34792 command_runner.go:130] > # 	"KILL",
	I1009 18:21:58.303931   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303944   34792 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 18:21:58.303958   34792 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 18:21:58.303969   34792 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 18:21:58.303982   34792 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 18:21:58.304001   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304011   34792 command_runner.go:130] > default_sysctls = [
	I1009 18:21:58.304018   34792 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 18:21:58.304025   34792 command_runner.go:130] > ]
	I1009 18:21:58.304033   34792 command_runner.go:130] > # List of devices on the host that a
	I1009 18:21:58.304046   34792 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 18:21:58.304055   34792 command_runner.go:130] > # allowed_devices = [
	I1009 18:21:58.304063   34792 command_runner.go:130] > # 	"/dev/fuse",
	I1009 18:21:58.304071   34792 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 18:21:58.304077   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304088   34792 command_runner.go:130] > # List of additional devices. specified as
	I1009 18:21:58.304102   34792 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 18:21:58.304113   34792 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 18:21:58.304124   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304153   34792 command_runner.go:130] > # additional_devices = [
	I1009 18:21:58.304163   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304172   34792 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 18:21:58.304182   34792 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 18:21:58.304188   34792 command_runner.go:130] > # 	"/etc/cdi",
	I1009 18:21:58.304197   34792 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 18:21:58.304202   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304212   34792 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 18:21:58.304225   34792 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 18:21:58.304234   34792 command_runner.go:130] > # Defaults to false.
	I1009 18:21:58.304243   34792 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 18:21:58.304257   34792 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 18:21:58.304269   34792 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 18:21:58.304278   34792 command_runner.go:130] > # hooks_dir = [
	I1009 18:21:58.304287   34792 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 18:21:58.304294   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304304   34792 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 18:21:58.304317   34792 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 18:21:58.304329   34792 command_runner.go:130] > # its default mounts from the following two files:
	I1009 18:21:58.304337   34792 command_runner.go:130] > #
	I1009 18:21:58.304347   34792 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 18:21:58.304361   34792 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 18:21:58.304382   34792 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 18:21:58.304389   34792 command_runner.go:130] > #
	I1009 18:21:58.304399   34792 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 18:21:58.304413   34792 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 18:21:58.304427   34792 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 18:21:58.304438   34792 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 18:21:58.304447   34792 command_runner.go:130] > #
	I1009 18:21:58.304455   34792 command_runner.go:130] > # default_mounts_file = ""
	I1009 18:21:58.304466   34792 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 18:21:58.304479   34792 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 18:21:58.304494   34792 command_runner.go:130] > # pids_limit = -1
	I1009 18:21:58.304508   34792 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 18:21:58.304521   34792 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 18:21:58.304532   34792 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 18:21:58.304547   34792 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 18:21:58.304557   34792 command_runner.go:130] > # log_size_max = -1
	I1009 18:21:58.304569   34792 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 18:21:58.304578   34792 command_runner.go:130] > # log_to_journald = false
	I1009 18:21:58.304601   34792 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 18:21:58.304614   34792 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 18:21:58.304622   34792 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 18:21:58.304634   34792 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 18:21:58.304647   34792 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 18:21:58.304657   34792 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 18:21:58.304669   34792 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 18:21:58.304677   34792 command_runner.go:130] > # read_only = false
	I1009 18:21:58.304688   34792 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 18:21:58.304700   34792 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 18:21:58.304708   34792 command_runner.go:130] > # live configuration reload.
	I1009 18:21:58.304716   34792 command_runner.go:130] > # log_level = "info"
	I1009 18:21:58.304726   34792 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 18:21:58.304737   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.304746   34792 command_runner.go:130] > # log_filter = ""
	I1009 18:21:58.304761   34792 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304773   34792 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 18:21:58.304781   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304795   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304805   34792 command_runner.go:130] > # uid_mappings = ""
	I1009 18:21:58.304815   34792 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304827   34792 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 18:21:58.304837   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304849   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304863   34792 command_runner.go:130] > # gid_mappings = ""
	I1009 18:21:58.304890   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 18:21:58.304904   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304916   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304929   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304939   34792 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 18:21:58.304949   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 18:21:58.304961   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304971   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304986   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.305032   34792 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 18:21:58.305045   34792 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 18:21:58.305054   34792 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 18:21:58.305063   34792 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 18:21:58.305074   34792 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 18:21:58.305084   34792 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 18:21:58.305097   34792 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 18:21:58.305106   34792 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 18:21:58.305116   34792 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 18:21:58.305124   34792 command_runner.go:130] > # drop_infra_ctr = true
	I1009 18:21:58.305148   34792 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 18:21:58.305162   34792 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 18:21:58.305177   34792 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 18:21:58.305185   34792 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 18:21:58.305197   34792 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 18:21:58.305209   34792 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 18:21:58.305222   34792 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 18:21:58.305233   34792 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 18:21:58.305241   34792 command_runner.go:130] > # shared_cpuset = ""
	I1009 18:21:58.305251   34792 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 18:21:58.305262   34792 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 18:21:58.305270   34792 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 18:21:58.305284   34792 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 18:21:58.305293   34792 command_runner.go:130] > # pinns_path = ""
	I1009 18:21:58.305305   34792 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 18:21:58.305318   34792 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 18:21:58.305328   34792 command_runner.go:130] > # enable_criu_support = true
	I1009 18:21:58.305337   34792 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 18:21:58.305350   34792 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 18:21:58.305359   34792 command_runner.go:130] > # enable_pod_events = false
	I1009 18:21:58.305371   34792 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 18:21:58.305382   34792 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 18:21:58.305389   34792 command_runner.go:130] > # default_runtime = "crun"
	I1009 18:21:58.305401   34792 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 18:21:58.305415   34792 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 18:21:58.305432   34792 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 18:21:58.305444   34792 command_runner.go:130] > # creation as a file is not desired either.
	I1009 18:21:58.305460   34792 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 18:21:58.305471   34792 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 18:21:58.305480   34792 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 18:21:58.305488   34792 command_runner.go:130] > # ]
	I1009 18:21:58.305499   34792 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 18:21:58.305512   34792 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 18:21:58.305524   34792 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 18:21:58.305535   34792 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 18:21:58.305542   34792 command_runner.go:130] > #
	I1009 18:21:58.305551   34792 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 18:21:58.305561   34792 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 18:21:58.305570   34792 command_runner.go:130] > # runtime_type = "oci"
	I1009 18:21:58.305582   34792 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 18:21:58.305590   34792 command_runner.go:130] > # inherit_default_runtime = false
	I1009 18:21:58.305601   34792 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 18:21:58.305611   34792 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 18:21:58.305619   34792 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 18:21:58.305628   34792 command_runner.go:130] > # monitor_env = []
	I1009 18:21:58.305638   34792 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 18:21:58.305647   34792 command_runner.go:130] > # allowed_annotations = []
	I1009 18:21:58.305665   34792 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 18:21:58.305674   34792 command_runner.go:130] > # no_sync_log = false
	I1009 18:21:58.305681   34792 command_runner.go:130] > # default_annotations = {}
	I1009 18:21:58.305690   34792 command_runner.go:130] > # stream_websockets = false
	I1009 18:21:58.305697   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.305730   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.305743   34792 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 18:21:58.305756   34792 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 18:21:58.305769   34792 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 18:21:58.305779   34792 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 18:21:58.305788   34792 command_runner.go:130] > #   in $PATH.
	I1009 18:21:58.305800   34792 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 18:21:58.305811   34792 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 18:21:58.305823   34792 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 18:21:58.305832   34792 command_runner.go:130] > #   state.
	I1009 18:21:58.305842   34792 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 18:21:58.305854   34792 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 18:21:58.305865   34792 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 18:21:58.305877   34792 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 18:21:58.305888   34792 command_runner.go:130] > #   the values from the default runtime at load time.
	I1009 18:21:58.305902   34792 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 18:21:58.305914   34792 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 18:21:58.305928   34792 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 18:21:58.305940   34792 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 18:21:58.305948   34792 command_runner.go:130] > #   The currently recognized values are:
	I1009 18:21:58.305962   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 18:21:58.305977   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 18:21:58.305989   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 18:21:58.306007   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 18:21:58.306022   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 18:21:58.306036   34792 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 18:21:58.306050   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 18:21:58.306061   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 18:21:58.306082   34792 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 18:21:58.306095   34792 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 18:21:58.306109   34792 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 18:21:58.306121   34792 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 18:21:58.306132   34792 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 18:21:58.306154   34792 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 18:21:58.306166   34792 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 18:21:58.306181   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 18:21:58.306194   34792 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 18:21:58.306204   34792 command_runner.go:130] > #   deprecated option "conmon".
	I1009 18:21:58.306216   34792 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 18:21:58.306226   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 18:21:58.306240   34792 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 18:21:58.306250   34792 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 18:21:58.306260   34792 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 18:21:58.306271   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 18:21:58.306285   34792 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1009 18:21:58.306294   34792 command_runner.go:130] > #   conmon-rs by using:
	I1009 18:21:58.306306   34792 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 18:21:58.306321   34792 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 18:21:58.306336   34792 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 18:21:58.306350   34792 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 18:21:58.306363   34792 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 18:21:58.306378   34792 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 18:21:58.306392   34792 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 18:21:58.306402   34792 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 18:21:58.306417   34792 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 18:21:58.306431   34792 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 18:21:58.306441   34792 command_runner.go:130] > #   when a machine crash happens.
	I1009 18:21:58.306452   34792 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 18:21:58.306467   34792 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 18:21:58.306481   34792 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 18:21:58.306492   34792 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 18:21:58.306506   34792 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 18:21:58.306520   34792 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 18:21:58.306525   34792 command_runner.go:130] > #
	I1009 18:21:58.306534   34792 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 18:21:58.306542   34792 command_runner.go:130] > #
	I1009 18:21:58.306552   34792 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 18:21:58.306565   34792 command_runner.go:130] > # blocked syscalls (permission denied errors) have a negative impact on the workload.
	I1009 18:21:58.306574   34792 command_runner.go:130] > #
	I1009 18:21:58.306584   34792 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 18:21:58.306597   34792 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 18:21:58.306605   34792 command_runner.go:130] > #
	I1009 18:21:58.306615   34792 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 18:21:58.306623   34792 command_runner.go:130] > # feature.
	I1009 18:21:58.306629   34792 command_runner.go:130] > #
	I1009 18:21:58.306641   34792 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1009 18:21:58.306654   34792 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 18:21:58.306667   34792 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 18:21:58.306680   34792 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 18:21:58.306692   34792 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 18:21:58.306700   34792 command_runner.go:130] > #
	I1009 18:21:58.306710   34792 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 18:21:58.306723   34792 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 18:21:58.306730   34792 command_runner.go:130] > #
	I1009 18:21:58.306740   34792 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1009 18:21:58.306752   34792 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 18:21:58.306760   34792 command_runner.go:130] > #
	I1009 18:21:58.306770   34792 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 18:21:58.306782   34792 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 18:21:58.306788   34792 command_runner.go:130] > # limitation.
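	The notifier wiring described above needs two pieces: a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", and a pod that sets that annotation with restartPolicy Never. A minimal pod sketch (hypothetical pod name; illustrative only, not part of the generated config):
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: seccomp-debug                                  # hypothetical
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"  # terminate the workload after the 5s timeout
	spec:
	  restartPolicy: Never                                 # required, or the kubelet restarts the container
	  containers:
	  - name: workload
	    image: registry.k8s.io/pause:3.10.1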
	I1009 18:21:58.306798   34792 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 18:21:58.306809   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 18:21:58.306818   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306825   34792 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 18:21:58.306837   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306847   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306853   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306863   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306870   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.306879   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.306888   34792 command_runner.go:130] > allowed_annotations = [
	I1009 18:21:58.306898   34792 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 18:21:58.306904   34792 command_runner.go:130] > ]
	I1009 18:21:58.306914   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.306921   34792 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 18:21:58.306931   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 18:21:58.306937   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306944   34792 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 18:21:58.306952   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306962   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306970   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306980   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306989   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.307006   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.307017   34792 command_runner.go:130] > privileged_without_host_devices = false
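	The crun and runc entries above leave most of the optional fields at their defaults. As an illustration of the fields documented in the comment block, a hypothetical additional handler might look like this (all values invented for the sketch; not part of the generated config):
	
	[crio.runtime.runtimes.kata]                             # hypothetical handler name
	runtime_path = "/usr/bin/kata-runtime"
	runtime_type = "vm"
	runtime_config_path = "/etc/kata/configuration.toml"     # only valid for the "vm" runtime_type
	monitor_env = ["LOG_DRIVER=systemd"]                     # conmon-rs logging target
	platform_runtime_paths = { "linux/arm64" = "/usr/bin/kata-runtime-arm64" }
	allowed_annotations = ["io.kubernetes.cri-o.Devices"]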
	I1009 18:21:58.307031   34792 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 18:21:58.307040   34792 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 18:21:58.307053   34792 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 18:21:58.307068   34792 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 18:21:58.307088   34792 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 18:21:58.307107   34792 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 18:21:58.307121   34792 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 18:21:58.307130   34792 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 18:21:58.307160   34792 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 18:21:58.307179   34792 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 18:21:58.307192   34792 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 18:21:58.307206   34792 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 18:21:58.307215   34792 command_runner.go:130] > # Example:
	I1009 18:21:58.307224   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 18:21:58.307234   34792 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 18:21:58.307244   34792 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 18:21:58.307253   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 18:21:58.307262   34792 command_runner.go:130] > # cpuset = "0-1"
	I1009 18:21:58.307269   34792 command_runner.go:130] > # cpushares = "5"
	I1009 18:21:58.307278   34792 command_runner.go:130] > # cpuquota = "1000"
	I1009 18:21:58.307285   34792 command_runner.go:130] > # cpuperiod = "100000"
	I1009 18:21:58.307294   34792 command_runner.go:130] > # cpulimit = "35"
	I1009 18:21:58.307301   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.307309   34792 command_runner.go:130] > # The workload name is workload-type.
	I1009 18:21:58.307323   34792 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 18:21:58.307336   34792 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 18:21:58.307349   34792 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 18:21:58.307365   34792 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 18:21:58.307377   34792 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
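	Combining the activation and per-container annotations documented above, a pod opting into the example workload-type would carry annotations like these (hypothetical pod and container names, following the format shown above):
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: tuned-pod                                     # hypothetical
	  annotations:
	    io.crio/workload: ""                              # activation annotation; key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "10"}'  # per-container override for container "app"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1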
	I1009 18:21:58.307388   34792 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 18:21:58.307399   34792 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 18:21:58.307410   34792 command_runner.go:130] > # Default value is set to true
	I1009 18:21:58.307418   34792 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 18:21:58.307430   34792 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 18:21:58.307440   34792 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 18:21:58.307449   34792 command_runner.go:130] > # Default value is set to 'false'
	I1009 18:21:58.307462   34792 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 18:21:58.307474   34792 command_runner.go:130] > # timezone: the timezone to set for a container in CRI-O.
	I1009 18:21:58.307487   34792 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 18:21:58.307495   34792 command_runner.go:130] > # timezone = ""
	I1009 18:21:58.307506   34792 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 18:21:58.307513   34792 command_runner.go:130] > #
	I1009 18:21:58.307523   34792 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 18:21:58.307536   34792 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 18:21:58.307544   34792 command_runner.go:130] > [crio.image]
	I1009 18:21:58.307556   34792 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 18:21:58.307566   34792 command_runner.go:130] > # default_transport = "docker://"
	I1009 18:21:58.307578   34792 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 18:21:58.307591   34792 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307600   34792 command_runner.go:130] > # global_auth_file = ""
	I1009 18:21:58.307608   34792 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 18:21:58.307620   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307630   34792 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.307641   34792 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 18:21:58.307654   34792 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307665   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307675   34792 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 18:21:58.307686   34792 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 18:21:58.307698   34792 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 18:21:58.307708   34792 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 18:21:58.307719   34792 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 18:21:58.307727   34792 command_runner.go:130] > # pause_command = "/pause"
	I1009 18:21:58.307740   34792 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 18:21:58.307753   34792 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 18:21:58.307765   34792 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 18:21:58.307777   34792 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 18:21:58.307789   34792 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 18:21:58.307802   34792 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 18:21:58.307811   34792 command_runner.go:130] > # pinned_images = [
	I1009 18:21:58.307819   34792 command_runner.go:130] > # ]
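	For illustration, the three pattern styles described above could be combined as follows (hypothetical image names):
	
	# pinned_images = [
	# 	"registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	# 	"quay.io/myorg/*",               # glob: wildcard allowed at the end
	# 	"*critical*",                    # keyword: wildcards on both ends
	# ]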
	I1009 18:21:58.307830   34792 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 18:21:58.307842   34792 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 18:21:58.307855   34792 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 18:21:58.307868   34792 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 18:21:58.307879   34792 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 18:21:58.307887   34792 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 18:21:58.307899   34792 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 18:21:58.307912   34792 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 18:21:58.307930   34792 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 18:21:58.307943   34792 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1009 18:21:58.307955   34792 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1009 18:21:58.307971   34792 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
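	Worked example of the lookup order (hypothetical namespace): with signature_policy_dir = "/etc/crio/policies", an image pull from namespace "team-a" would consult /etc/crio/policies/team-a.json first, falling back to the signature_policy above if that file does not exist.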
	I1009 18:21:58.307982   34792 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 18:21:58.308001   34792 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 18:21:58.308010   34792 command_runner.go:130] > # changing them here.
	I1009 18:21:58.308020   34792 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 18:21:58.308029   34792 command_runner.go:130] > # insecure_registries = [
	I1009 18:21:58.308035   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308049   34792 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 18:21:58.308059   34792 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1009 18:21:58.308067   34792 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 18:21:58.308079   34792 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 18:21:58.308089   34792 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 18:21:58.308100   34792 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 18:21:58.308114   34792 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 18:21:58.308123   34792 command_runner.go:130] > # auto_reload_registries = false
	I1009 18:21:58.308133   34792 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 18:21:58.308163   34792 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, which is pull_progress_timeout / 10.
	I1009 18:21:58.308174   34792 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 18:21:58.308183   34792 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 18:21:58.308191   34792 command_runner.go:130] > # The mode of short name resolution.
	I1009 18:21:58.308205   34792 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 18:21:58.308219   34792 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 18:21:58.308230   34792 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 18:21:58.308238   34792 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 18:21:58.308250   34792 command_runner.go:130] > # oci_artifact_mount_support determines whether CRI-O should support mounting OCI artifacts.
	I1009 18:21:58.308261   34792 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 18:21:58.308271   34792 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 18:21:58.308282   34792 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 18:21:58.308291   34792 command_runner.go:130] > # CNI plugins.
	I1009 18:21:58.308297   34792 command_runner.go:130] > [crio.network]
	I1009 18:21:58.308312   34792 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 18:21:58.308324   34792 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 18:21:58.308334   34792 command_runner.go:130] > # cni_default_network = ""
	I1009 18:21:58.308345   34792 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 18:21:58.308355   34792 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 18:21:58.308365   34792 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 18:21:58.308373   34792 command_runner.go:130] > # plugin_dirs = [
	I1009 18:21:58.308380   34792 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 18:21:58.308388   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308395   34792 command_runner.go:130] > # List of included pod metrics.
	I1009 18:21:58.308404   34792 command_runner.go:130] > # included_pod_metrics = [
	I1009 18:21:58.308411   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308423   34792 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1009 18:21:58.308429   34792 command_runner.go:130] > [crio.metrics]
	I1009 18:21:58.308440   34792 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 18:21:58.308447   34792 command_runner.go:130] > # enable_metrics = false
	I1009 18:21:58.308457   34792 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 18:21:58.308466   34792 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 18:21:58.308479   34792 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 18:21:58.308492   34792 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 18:21:58.308504   34792 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 18:21:58.308514   34792 command_runner.go:130] > # metrics_collectors = [
	I1009 18:21:58.308520   34792 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 18:21:58.308525   34792 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 18:21:58.308530   34792 command_runner.go:130] > # 	"containers_oom_total",
	I1009 18:21:58.308535   34792 command_runner.go:130] > # 	"processes_defunct",
	I1009 18:21:58.308540   34792 command_runner.go:130] > # 	"operations_total",
	I1009 18:21:58.308546   34792 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 18:21:58.308553   34792 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 18:21:58.308560   34792 command_runner.go:130] > # 	"operations_errors_total",
	I1009 18:21:58.308567   34792 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 18:21:58.308574   34792 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 18:21:58.308581   34792 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 18:21:58.308590   34792 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 18:21:58.308598   34792 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 18:21:58.308605   34792 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 18:21:58.308613   34792 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 18:21:58.308620   34792 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 18:21:58.308630   34792 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 18:21:58.308635   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308646   34792 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 18:21:58.308656   34792 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 18:21:58.308664   34792 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 18:21:58.308673   34792 command_runner.go:130] > # metrics_port = 9090
	I1009 18:21:58.308682   34792 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 18:21:58.308691   34792 command_runner.go:130] > # metrics_socket = ""
	I1009 18:21:58.308699   34792 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 18:21:58.308713   34792 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 18:21:58.308726   34792 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 18:21:58.308736   34792 command_runner.go:130] > # certificate on any modification event.
	I1009 18:21:58.308743   34792 command_runner.go:130] > # metrics_cert = ""
	I1009 18:21:58.308754   34792 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 18:21:58.308765   34792 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 18:21:58.308774   34792 command_runner.go:130] > # metrics_key = ""
	I1009 18:21:58.308785   34792 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 18:21:58.308793   34792 command_runner.go:130] > [crio.tracing]
	I1009 18:21:58.308803   34792 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 18:21:58.308812   34792 command_runner.go:130] > # enable_tracing = false
	I1009 18:21:58.308821   34792 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 18:21:58.308831   34792 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 18:21:58.308842   34792 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 18:21:58.308854   34792 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 18:21:58.308864   34792 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 18:21:58.308871   34792 command_runner.go:130] > [crio.nri]
	I1009 18:21:58.308879   34792 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 18:21:58.308888   34792 command_runner.go:130] > # enable_nri = true
	I1009 18:21:58.308908   34792 command_runner.go:130] > # NRI socket to listen on.
	I1009 18:21:58.308919   34792 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 18:21:58.308926   34792 command_runner.go:130] > # NRI plugin directory to use.
	I1009 18:21:58.308934   34792 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 18:21:58.308945   34792 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 18:21:58.308955   34792 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 18:21:58.308967   34792 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 18:21:58.309020   34792 command_runner.go:130] > # nri_disable_connections = false
	I1009 18:21:58.309031   34792 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 18:21:58.309039   34792 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 18:21:58.309050   34792 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 18:21:58.309060   34792 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 18:21:58.309070   34792 command_runner.go:130] > # NRI default validator configuration.
	I1009 18:21:58.309081   34792 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 18:21:58.309094   34792 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 18:21:58.309105   34792 command_runner.go:130] > # can be restricted/rejected:
	I1009 18:21:58.309114   34792 command_runner.go:130] > # - OCI hook injection
	I1009 18:21:58.309123   34792 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 18:21:58.309144   34792 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 18:21:58.309154   34792 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 18:21:58.309164   34792 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 18:21:58.309174   34792 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 18:21:58.309187   34792 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 18:21:58.309199   34792 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 18:21:58.309206   34792 command_runner.go:130] > #
	I1009 18:21:58.309213   34792 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 18:21:58.309228   34792 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 18:21:58.309239   34792 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 18:21:58.309249   34792 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 18:21:58.309259   34792 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 18:21:58.309270   34792 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 18:21:58.309282   34792 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 18:21:58.309292   34792 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 18:21:58.309300   34792 command_runner.go:130] > # ]
	I1009 18:21:58.309310   34792 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 18:21:58.309320   34792 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 18:21:58.309329   34792 command_runner.go:130] > [crio.stats]
	I1009 18:21:58.309338   34792 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 18:21:58.309350   34792 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 18:21:58.309361   34792 command_runner.go:130] > # stats_collection_period = 0
	I1009 18:21:58.309373   34792 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 18:21:58.309386   34792 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 18:21:58.309395   34792 command_runner.go:130] > # collection_period = 0
	I1009 18:21:58.309439   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287848676Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 18:21:58.309455   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287874416Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 18:21:58.309486   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.28789246Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 18:21:58.309504   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287909281Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 18:21:58.309520   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287966347Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:58.309548   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.288147535Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 18:21:58.309568   34792 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 18:21:58.309652   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:58.309667   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:58.309686   34792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:21:58.309718   34792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:21:58.309867   34792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:21:58.309941   34792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:21:58.317943   34792 command_runner.go:130] > kubeadm
	I1009 18:21:58.317964   34792 command_runner.go:130] > kubectl
	I1009 18:21:58.317972   34792 command_runner.go:130] > kubelet
	I1009 18:21:58.317992   34792 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:21:58.318041   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:21:58.325700   34792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:21:58.338455   34792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:21:58.350701   34792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:21:58.362930   34792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:21:58.366724   34792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
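	The grep above confirms that control-plane.minikube.internal already resolves to the node IP. When the check fails, the entry has to be appended first; a rough sketch of that step (an assumption about the repair, not the literal minikube implementation):
	
	grep -q "control-plane.minikube.internal" /etc/hosts || \
	  echo -e "192.168.49.2\tcontrol-plane.minikube.internal" | sudo tee -a /etc/hosts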
	I1009 18:21:58.366809   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:58.451602   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:58.464478   34792 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:21:58.464503   34792 certs.go:195] generating shared ca certs ...
	I1009 18:21:58.464518   34792 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:58.464657   34792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:21:58.464699   34792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:21:58.464708   34792 certs.go:257] generating profile certs ...
	I1009 18:21:58.464789   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:21:58.464832   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:21:58.464870   34792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:21:58.464880   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:21:58.464891   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:21:58.464904   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:21:58.464914   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:21:58.464926   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:21:58.464938   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:21:58.464950   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:21:58.464961   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:21:58.465007   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:21:58.465033   34792 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:21:58.465040   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:21:58.465060   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:21:58.465083   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:21:58.465117   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:21:58.465182   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:58.465212   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.465226   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.465252   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.465730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:21:58.483386   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:21:58.500383   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:21:58.517315   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:21:58.533903   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:21:58.550845   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:21:58.567242   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:21:58.584667   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:21:58.601626   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:21:58.618749   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:21:58.635789   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:21:58.652270   34792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:21:58.664508   34792 ssh_runner.go:195] Run: openssl version
	I1009 18:21:58.670569   34792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 18:21:58.670643   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:21:58.679189   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683037   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683067   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683111   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.716325   34792 command_runner.go:130] > b5213941
	I1009 18:21:58.716574   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:21:58.724647   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:21:58.732750   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736237   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736342   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736392   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.769488   34792 command_runner.go:130] > 51391683
	I1009 18:21:58.769675   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:21:58.778213   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:21:58.786758   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790431   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790472   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790516   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.824579   34792 command_runner.go:130] > 3ec20f2e
	I1009 18:21:58.824670   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
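	The three hash-and-symlink rounds above all follow the same OpenSSL convention: a CA certificate is looked up under /etc/ssl/certs/<subject-hash>.0. One round, condensed into a sketch (cert path and hash taken from the log above):
	
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # OpenSSL resolves trust anchors by <hash>.0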
	I1009 18:21:58.832975   34792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836722   34792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836745   34792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 18:21:58.836750   34792 command_runner.go:130] > Device: 8,1	Inode: 583629      Links: 1
	I1009 18:21:58.836756   34792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.836762   34792 command_runner.go:130] > Access: 2025-10-09 18:17:52.024667536 +0000
	I1009 18:21:58.836766   34792 command_runner.go:130] > Modify: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836771   34792 command_runner.go:130] > Change: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836775   34792 command_runner.go:130] >  Birth: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836829   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:21:58.871297   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.871384   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:21:58.905951   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.906293   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:21:58.941072   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.941180   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:21:58.975637   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.975713   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:21:59.010686   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:59.010763   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 18:21:59.045288   34792 command_runner.go:130] > Certificate will not expire
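	Each expiry probe above uses the same idiom: -checkend N exits non-zero if the certificate expires within the next N seconds, so 86400 asks whether the cert is still valid 24 hours from now. Standalone sketch:
	
	# exit status 0 (and "Certificate will not expire") means valid for at least another day
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  || echo "expires within 24h; certificates would need regeneration"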
	I1009 18:21:59.045372   34792 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:59.045468   34792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:21:59.045548   34792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:21:59.072734   34792 cri.go:89] found id: ""
	I1009 18:21:59.072811   34792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:21:59.080291   34792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 18:21:59.080312   34792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 18:21:59.080317   34792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 18:21:59.080960   34792 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:21:59.080977   34792 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:21:59.081028   34792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:21:59.088791   34792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:21:59.088891   34792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.088923   34792 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753440" cluster setting kubeconfig missing "functional-753440" context setting]
	I1009 18:21:59.089226   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.115972   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.116113   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.116551   34792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:21:59.116565   34792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:21:59.116570   34792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:21:59.116574   34792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:21:59.116578   34792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:21:59.116681   34792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:21:59.116939   34792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:21:59.125251   34792 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:21:59.125284   34792 kubeadm.go:601] duration metric: took 44.302105ms to restartPrimaryControlPlane
	I1009 18:21:59.125294   34792 kubeadm.go:402] duration metric: took 79.928873ms to StartCluster
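[Editor's note] The restart decision above follows directly from two shell checks: the `ls` of kubeadm's config files succeeded ("found existing configuration files, will attempt cluster restart"), and `diff -u` of the rendered vs. deployed kubeadm.yaml exited 0 ("The running cluster does not require reconfiguration"). A sketch of that flow, mirroring the exact commands in the log — illustrative only, not minikube's code:

// Sketch of the restart-vs-reinit decision visible in the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "found existing configuration files, will attempt cluster restart"
	if err := exec.Command("sudo", "ls",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd").Run(); err != nil {
		fmt.Println("no existing config; a fresh kubeadm init would be needed")
		return
	}
	// diff exit status 0 means the desired kubeadm.yaml matches the deployed
	// one: "The running cluster does not require reconfiguration".
	if err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").Run(); err != nil {
		fmt.Println("kubeadm.yaml changed; control plane needs reconfiguration")
		return
	}
	fmt.Println("restarting primary control plane without reconfiguration")
}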
	I1009 18:21:59.125313   34792 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.125417   34792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.125977   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.126266   34792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:21:59.126330   34792 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:21:59.126472   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:59.126485   34792 addons.go:69] Setting default-storageclass=true in profile "functional-753440"
	I1009 18:21:59.126503   34792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753440"
	I1009 18:21:59.126475   34792 addons.go:69] Setting storage-provisioner=true in profile "functional-753440"
	I1009 18:21:59.126533   34792 addons.go:238] Setting addon storage-provisioner=true in "functional-753440"
	I1009 18:21:59.126575   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.126787   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.126953   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.129433   34792 out.go:179] * Verifying Kubernetes components...
	I1009 18:21:59.130694   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:59.147348   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.147489   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.147681   34792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:21:59.147763   34792 addons.go:238] Setting addon default-storageclass=true in "functional-753440"
	I1009 18:21:59.147799   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.148103   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.149131   34792 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.149169   34792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:21:59.149223   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172020   34792 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.172047   34792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:21:59.172108   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172953   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.190936   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.227445   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:59.240811   34792 node_ready.go:35] waiting up to 6m0s for node "functional-753440" to be "Ready" ...
	I1009 18:21:59.240954   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.241028   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.241430   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.284375   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.300190   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.338559   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.338609   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.338653   34792 retry.go:31] will retry after 183.514108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353053   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.353121   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353157   34792 retry.go:31] will retry after 252.751171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.522422   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.573424   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.575988   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.576058   34792 retry.go:31] will retry after 293.779687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.606194   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.660438   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.660484   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.660501   34792 retry.go:31] will retry after 279.387954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
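[Editor's note] Every apply here fails before reaching the cluster: kubectl's client-side validation first fetches the OpenAPI schema from the apiserver, and the apiserver on port 8441 is not listening yet. The `retry.go:31` lines show jittered, roughly growing delays between attempts (183ms, 252ms, 293ms, ... up to several seconds later in the log). A sketch of that retry pattern, assuming hypothetical helper names — illustrative of the pattern only, not minikube's retry package:

// Sketch of retry-with-jittered-backoff as seen in the retry.go lines.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping a randomized, growing delay between failures.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Randomize between 0.5x and 1.5x of the current delay so
		// concurrent retriers (storage-provisioner, storageclass) don't sync up.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // grow the base delay each round
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("connection refused (attempt %d)", calls)
		}
		return nil
	})
}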
	I1009 18:21:59.741722   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.741829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.742206   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.870497   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.921333   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.923563   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.923589   34792 retry.go:31] will retry after 737.997993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.940822   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.989898   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.992209   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.992239   34792 retry.go:31] will retry after 533.533276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.241740   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.242177   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:00.526746   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:00.575738   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.578103   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.578131   34792 retry.go:31] will retry after 930.387704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.662455   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:00.715389   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.715427   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.715452   34792 retry.go:31] will retry after 867.874306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.741572   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:01.241687   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.241751   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.242091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:01.242159   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
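[Editor's note] The GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 requests repeating every ~500ms are the "waiting up to 6m0s for node ... to be Ready" poll; each fails with connection refused for the same reason as the applies. A client-go sketch of such a readiness poll, assuming a placeholder kubeconfig path and a recent client-go API — illustrative, not minikube's implementation:

// Sketch of polling a node's Ready condition until it is true or a
// deadline passes, as the node_ready wait above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute) // mirrors "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-753440", metav1.GetOptions{})
		if err != nil {
			// Matches the log: connection refused while the apiserver is down.
			fmt.Println("will retry:", err)
		} else if nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to be Ready")
}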
	I1009 18:22:01.509541   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:01.558188   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.560577   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.560605   34792 retry.go:31] will retry after 1.199996419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.583824   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:01.634758   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.634811   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.634834   34792 retry.go:31] will retry after 674.661756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.741022   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.741106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.741428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.309923   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:02.359167   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.361481   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.361513   34792 retry.go:31] will retry after 1.255051156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.741014   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.741086   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.741469   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.761694   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:02.809418   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.811709   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.811735   34792 retry.go:31] will retry after 2.010356843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.241377   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.241665   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:03.617237   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:03.670575   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:03.670619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.670643   34792 retry.go:31] will retry after 3.029315393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:03.742368   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:04.241167   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.241255   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.241616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.741405   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.741793   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.823125   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:04.874252   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:04.876942   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:04.876977   34792 retry.go:31] will retry after 2.337146666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:05.241523   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.241603   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.241925   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:05.741876   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.741944   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.742306   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.241056   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.241120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.241524   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:06.241591   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:06.701185   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:06.741960   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.742030   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.742348   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.753588   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:06.753625   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:06.753645   34792 retry.go:31] will retry after 5.067292314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.214286   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:07.241989   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.242085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.242465   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:07.267576   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:07.267619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.267638   34792 retry.go:31] will retry after 3.639407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.741211   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.741611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:08.241376   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.241468   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.241797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:08.241859   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:08.741654   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.741723   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.742130   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.742012   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.742487   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.241171   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.241238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.241608   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.741552   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.741634   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.741987   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:10.742077   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:10.907343   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:10.958356   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:10.960749   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:10.960774   34792 retry.go:31] will retry after 7.184910667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.241304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.241646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.741393   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.741703   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.821955   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:11.870785   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:11.873227   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.873260   34792 retry.go:31] will retry after 9.534535371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:12.241850   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.242244   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:12.741040   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.741121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:13.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:13.241344   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:13.241681   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:13.241752   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:13.741448   34792 type.go:168] "Request Body" body=""
	I1009 18:22:13.741557   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:13.741881   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:14.241703   34792 type.go:168] "Request Body" body=""
	I1009 18:22:14.241767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:14.242071   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:14.741971   34792 type.go:168] "Request Body" body=""
	I1009 18:22:14.742058   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:14.742415   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:15.241162   34792 type.go:168] "Request Body" body=""
	I1009 18:22:15.241227   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:15.241543   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:15.741329   34792 type.go:168] "Request Body" body=""
	I1009 18:22:15.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:15.741713   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:15.741779   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
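
Interleaved with the apply retries, node_ready.go polls GET /api/v1/nodes/functional-753440 roughly every 500ms, waiting for the node's Ready condition to turn True; each probe fails with the same connection refused until the apiserver comes back. A condensed client-go sketch of that wait loop follows — an approximation of what the log shows, not minikube's node_ready.go; the kubeconfig path and node name are taken from the log itself.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll at the ~500ms cadence visible in the timestamps above.
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-753440", metav1.GetOptions{})
			if err != nil {
				// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
				fmt.Println("will retry:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
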
	I1009 18:22:16.241461   34792 type.go:168] "Request Body" body=""
	I1009 18:22:16.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:16.241841   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:16.741694   34792 type.go:168] "Request Body" body=""
	I1009 18:22:16.741756   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:16.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:17.241938   34792 type.go:168] "Request Body" body=""
	I1009 18:22:17.242012   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:17.242354   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:17.741119   34792 type.go:168] "Request Body" body=""
	I1009 18:22:17.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:17.741520   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:18.146014   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:18.197672   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:18.200076   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.200108   34792 retry.go:31] will retry after 13.416592948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.241338   34792 type.go:168] "Request Body" body=""
	I1009 18:22:18.241421   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:18.241742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:18.241815   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:18.741635   34792 type.go:168] "Request Body" body=""
	I1009 18:22:18.741716   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:18.742048   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:19.241915   34792 type.go:168] "Request Body" body=""
	I1009 18:22:19.241986   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:19.242351   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:19.741113   34792 type.go:168] "Request Body" body=""
	I1009 18:22:19.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:19.741558   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:20.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:22:20.241372   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:20.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:20.741538   34792 type.go:168] "Request Body" body=""
	I1009 18:22:20.741648   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:20.742078   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:20.742168   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:21.241982   34792 type.go:168] "Request Body" body=""
	I1009 18:22:21.242072   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:21.242428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:21.408800   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:21.460386   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:21.460443   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:21.460465   34792 retry.go:31] will retry after 6.196258431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:21.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:21.741973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:21.742340   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:22.241109   34792 type.go:168] "Request Body" body=""
	I1009 18:22:22.241216   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:22.241540   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:22.741267   34792 type.go:168] "Request Body" body=""
	I1009 18:22:22.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:22.741668   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:23.241400   34792 type.go:168] "Request Body" body=""
	I1009 18:22:23.241466   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:23.241777   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:23.241839   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:23.741636   34792 type.go:168] "Request Body" body=""
	I1009 18:22:23.741720   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:23.742032   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:24.241849   34792 type.go:168] "Request Body" body=""
	I1009 18:22:24.241912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:24.242229   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:24.740969   34792 type.go:168] "Request Body" body=""
	I1009 18:22:24.741034   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:24.741359   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:25.241097   34792 type.go:168] "Request Body" body=""
	I1009 18:22:25.241186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:25.241506   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:25.741317   34792 type.go:168] "Request Body" body=""
	I1009 18:22:25.741384   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:25.741717   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:25.741785   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:26.241467   34792 type.go:168] "Request Body" body=""
	I1009 18:22:26.241530   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:26.241836   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:26.741641   34792 type.go:168] "Request Body" body=""
	I1009 18:22:26.741717   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:26.742054   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:27.241867   34792 type.go:168] "Request Body" body=""
	I1009 18:22:27.241935   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:27.242289   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:27.657912   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:27.709732   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:27.709776   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:27.709796   34792 retry.go:31] will retry after 21.104663041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:27.741976   34792 type.go:168] "Request Body" body=""
	I1009 18:22:27.742060   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:27.742387   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:27.742447   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:28.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:28.241272   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:28.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:28.741374   34792 type.go:168] "Request Body" body=""
	I1009 18:22:28.741445   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:28.741741   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:29.241532   34792 type.go:168] "Request Body" body=""
	I1009 18:22:29.241600   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:29.241930   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:29.741720   34792 type.go:168] "Request Body" body=""
	I1009 18:22:29.741782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:29.742115   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:30.241968   34792 type.go:168] "Request Body" body=""
	I1009 18:22:30.242038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:30.242354   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:30.242406   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:30.741168   34792 type.go:168] "Request Body" body=""
	I1009 18:22:30.741235   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:30.741522   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:31.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:31.241332   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:31.241693   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:31.617269   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:31.669784   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:31.669834   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:31.669851   34792 retry.go:31] will retry after 15.154475243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:31.740998   34792 type.go:168] "Request Body" body=""
	I1009 18:22:31.741063   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:31.741420   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:32.241118   34792 type.go:168] "Request Body" body=""
	I1009 18:22:32.241207   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:32.241526   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:32.741162   34792 type.go:168] "Request Body" body=""
	I1009 18:22:32.741230   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:32.741578   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:32.741636   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:33.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:33.241273   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:33.241600   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:33.741209   34792 type.go:168] "Request Body" body=""
	I1009 18:22:33.741274   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:33.741593   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:34.241252   34792 type.go:168] "Request Body" body=""
	I1009 18:22:34.241319   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:34.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:34.741297   34792 type.go:168] "Request Body" body=""
	I1009 18:22:34.741366   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:34.741662   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:34.741714   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:35.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:22:35.241319   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:35.241631   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:35.741518   34792 type.go:168] "Request Body" body=""
	I1009 18:22:35.741590   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:35.741908   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:36.241473   34792 type.go:168] "Request Body" body=""
	I1009 18:22:36.241537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:36.241867   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:36.741507   34792 type.go:168] "Request Body" body=""
	I1009 18:22:36.741582   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:36.741900   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:36.741954   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:37.241503   34792 type.go:168] "Request Body" body=""
	I1009 18:22:37.241570   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:37.241880   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:37.741492   34792 type.go:168] "Request Body" body=""
	I1009 18:22:37.741564   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:37.741883   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:38.241508   34792 type.go:168] "Request Body" body=""
	I1009 18:22:38.241573   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:38.241878   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:38.741474   34792 type.go:168] "Request Body" body=""
	I1009 18:22:38.741571   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:38.741868   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:39.241856   34792 type.go:168] "Request Body" body=""
	I1009 18:22:39.241916   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:39.242237   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:39.242300   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:39.741898   34792 type.go:168] "Request Body" body=""
	I1009 18:22:39.741969   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:39.742303   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:40.241969   34792 type.go:168] "Request Body" body=""
	I1009 18:22:40.242062   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:40.242400   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:40.741170   34792 type.go:168] "Request Body" body=""
	I1009 18:22:40.741238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:40.741556   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:41.241169   34792 type.go:168] "Request Body" body=""
	I1009 18:22:41.241235   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:41.241568   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:41.741187   34792 type.go:168] "Request Body" body=""
	I1009 18:22:41.741253   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:41.741589   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:41.741643   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:42.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:42.241272   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:42.241611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:42.741205   34792 type.go:168] "Request Body" body=""
	I1009 18:22:42.741278   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:42.741595   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:43.241190   34792 type.go:168] "Request Body" body=""
	I1009 18:22:43.241258   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:43.241582   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:43.741198   34792 type.go:168] "Request Body" body=""
	I1009 18:22:43.741263   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:43.741575   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:44.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:44.241263   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:44.241577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:44.241629   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:44.741212   34792 type.go:168] "Request Body" body=""
	I1009 18:22:44.741283   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:44.741598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:45.241235   34792 type.go:168] "Request Body" body=""
	I1009 18:22:45.241301   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:45.241671   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:45.741562   34792 type.go:168] "Request Body" body=""
	I1009 18:22:45.741629   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:45.741942   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:46.241628   34792 type.go:168] "Request Body" body=""
	I1009 18:22:46.241692   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:46.241993   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:46.242063   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:46.741676   34792 type.go:168] "Request Body" body=""
	I1009 18:22:46.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:46.742077   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:46.825331   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:46.875678   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:46.878302   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:46.878331   34792 retry.go:31] will retry after 24.753743157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
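
The error text itself names the escape hatch: client-side validation can be skipped with --validate=false, which removes the dependency on the /openapi/v2 download. That only sidesteps the symptom, though — with the apiserver down, the apply would still fail at submission. A hypothetical variant of the shelled-out command above, expressed in Go for illustration (same binary, flags, and manifest path as in the log, plus the suggested flag):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// sudo accepts leading VAR=value arguments as environment assignments,
		// which is how the log passes KUBECONFIG to kubectl.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storageclass.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}
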
	I1009 18:22:47.241842   34792 type.go:168] "Request Body" body=""
	I1009 18:22:47.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:47.242245   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:47.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:22:47.741128   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:47.741463   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:48.241206   34792 type.go:168] "Request Body" body=""
	I1009 18:22:48.241284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:48.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:48.741361   34792 type.go:168] "Request Body" body=""
	I1009 18:22:48.741434   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:48.741764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:48.741814   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:48.815023   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:48.866903   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:48.866953   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:48.866975   34792 retry.go:31] will retry after 23.693621864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:49.241681   34792 type.go:168] "Request Body" body=""
	I1009 18:22:49.241760   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:49.242189   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:49.741809   34792 type.go:168] "Request Body" body=""
	I1009 18:22:49.741872   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:49.742216   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:50.241969   34792 type.go:168] "Request Body" body=""
	I1009 18:22:50.242049   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:50.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:50.741244   34792 type.go:168] "Request Body" body=""
	I1009 18:22:50.741312   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:50.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:51.241250   34792 type.go:168] "Request Body" body=""
	I1009 18:22:51.241336   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:51.241653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:51.241707   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:51.741250   34792 type.go:168] "Request Body" body=""
	I1009 18:22:51.741317   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:51.741731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:52.241243   34792 type.go:168] "Request Body" body=""
	I1009 18:22:52.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:52.241668   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:52.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:22:52.741378   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:52.741687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:53.241293   34792 type.go:168] "Request Body" body=""
	I1009 18:22:53.241355   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:53.241674   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:53.241725   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:53.741263   34792 type.go:168] "Request Body" body=""
	I1009 18:22:53.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:53.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:54.241249   34792 type.go:168] "Request Body" body=""
	I1009 18:22:54.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:54.241652   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:54.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:22:54.741337   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:54.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:55.241278   34792 type.go:168] "Request Body" body=""
	I1009 18:22:55.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:55.241675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:55.741565   34792 type.go:168] "Request Body" body=""
	I1009 18:22:55.741632   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:55.741942   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:55.741993   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:56.241590   34792 type.go:168] "Request Body" body=""
	I1009 18:22:56.241657   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:56.241967   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:56.741618   34792 type.go:168] "Request Body" body=""
	I1009 18:22:56.741686   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:56.742001   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:57.241690   34792 type.go:168] "Request Body" body=""
	I1009 18:22:57.241747   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:57.242085   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:57.742290   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET poll repeated every ~500ms through 18:23:11.242, each attempt returning an empty response; the "connection refused" warning recurred roughly every 2.5s ...]
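	The round_trippers lines above are client-side HTTP tracing: every apiserver call is wrapped so the verb, URL, headers, and latency are printed, and a failed dial shows up as an empty status. A minimal sketch of that wrapping pattern in Go, using only the standard library (loggingTransport and the hard-coded URL are illustrative, not minikube's actual code):

	// Sketch of per-request logging via a wrapping http.RoundTripper.
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("Request verb=%s url=%s accept=%q\n",
			req.Method, req.URL, req.Header.Get("Accept"))
		start := time.Now()
		resp, err := t.next.RoundTrip(req)
		if err != nil {
			// The status="" case in the log: the dial failed, so there
			// is no HTTP response at all, only a transport error.
			fmt.Printf("Response status=\"\" milliseconds=%d err=%v\n",
				time.Since(start).Milliseconds(), err)
			return nil, err
		}
		fmt.Printf("Response status=%q milliseconds=%d\n",
			resp.Status, time.Since(start).Milliseconds())
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
		// Illustrative endpoint; in the log this is the minikube apiserver.
		_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-753440")
	}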
	I1009 18:23:11.632645   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:11.684065   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:11.686606   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:23:11.686651   34792 retry.go:31] will retry after 43.228082894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
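	The failed apply above is not abandoned; it is rescheduled ("will retry after 43.228082894s"). The shape of that loop is ordinary capped, jittered exponential backoff around the kubectl invocation. A minimal sketch, assuming a plain shell-out to kubectl (applyManifest, retryApply, and the backoff constants are hypothetical, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyManifest shells out the same way the log lines do; while the
	// apiserver is down it fails with the "connection refused" error above.
	func applyManifest(path string) error {
		out, err := exec.Command("sudo", "kubectl", "apply", "--force", "-f", path).CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply %s: %v\noutput:\n%s", path, err, out)
		}
		return nil
	}

	// retryApply mirrors the "apply failed, will retry after ..." behaviour:
	// capped exponential backoff with jitter between attempts.
	func retryApply(path string, attempts int) error {
		backoff := 2 * time.Second
		var lastErr error
		for i := 0; i < attempts; i++ {
			if lastErr = applyManifest(path); lastErr == nil {
				return nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("apply failed, will retry after %v: %v\n", sleep, lastErr)
			time.Sleep(sleep)
			if backoff < time.Minute {
				backoff *= 2
			}
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println(err)
		}
	}

	Note that the failure is client-side: per the error text, kubectl downloads the OpenAPI schema from the apiserver to validate the manifest, so validation itself cannot run while the apiserver is down, which is why the message suggests --validate=false.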
	W1009 18:23:11.742398   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:12.560933   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:23:12.614798   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614843   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614940   34792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
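	The recurring node_ready.go warnings come from a readiness poll: GET the node object every ~500ms, treat a transport error such as connection refused as "not ready yet, retry", and succeed only once the Ready condition reports True. A minimal sketch with client-go (waitNodeReady is a hypothetical name, and this assumes client-go is available in go.mod; minikube's real loop lives elsewhere):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				// Matches the W lines in the log: log and keep retrying
				// on transient transport errors.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(cs, "functional-753440", 4*time.Minute); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}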
	I1009 18:23:12.741072   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.741484   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.241192   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.241516   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.741110   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.741196   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.741493   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:14.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.241314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:14.241738   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:14.741425   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.741488   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.741803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.241603   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.241993   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.741872   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.741942   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.742284   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.241004   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.241108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.241472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.741281   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.741357   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.741657   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:16.741710   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:17.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.241489   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.241829   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:17.741674   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.741762   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.742082   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.241893   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.241965   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.242388   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.741175   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.741239   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.741553   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:19.241408   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.241852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:19.241908   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:19.741678   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.742039   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.241909   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.241972   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.741268   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.741646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.241394   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.241459   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.741624   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.741688   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.741997   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:21.742063   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:22.241916   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.242380   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:22.741197   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.741265   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.741575   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.241382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.741463   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.741537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.741848   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:24.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.241717   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.242059   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:24.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:24.741910   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.741982   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.742333   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.241063   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.241128   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.241505   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.741559   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.741626   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.741933   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:26.241874   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.242332   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:26.242390   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:26.741061   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.741125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.741525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.241264   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.241334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.241644   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.741375   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.741438   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.741748   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.241487   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.241553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.241862   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.741699   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.741767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:28.742126   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:29.241949   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.242051   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.242384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:29.741054   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.741120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.241596   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.741484   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.741560   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.741926   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:31.241778   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.241839   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.242174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:31.242227   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:31.740976   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.741038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.741384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.241106   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.241567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.741352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.241340   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.241406   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.241743   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.741456   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.741516   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.741808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:33.741862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:34.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.241695   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.242060   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:34.741908   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.741974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.241113   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.741288   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:36.241422   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.241820   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:36.241874   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:36.741640   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.741707   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.742009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.241833   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.241903   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.242258   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.740969   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.741033   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.741371   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.241096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.241188   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.241533   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.741616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:38.741669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:39.241545   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.241620   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:39.741751   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.741816   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.742174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.241991   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.242060   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.242448   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.741326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:40.741695   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:41.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.241463   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:41.741321   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.741709   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.241467   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.241529   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.241897   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.741700   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.741768   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.742079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:42.742160   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:43.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:43.741093   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.741513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.241263   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.241346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.241690   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.741269   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.741339   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:45.241373   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.241435   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.241795   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:45.241846   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:45.741727   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.741791   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.742097   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.241926   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.241996   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.741120   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.741602   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.241322   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.241391   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.241768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.741575   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.741638   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.741939   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:47.741988   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:48.241711   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.241771   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.242111   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:48.741933   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.742004   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.742339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.241046   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.241123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.241511   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.741638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET against /api/v1/nodes/functional-753440 repeats every ~500 ms through 18:23:54.742, each attempt returning an empty response; node_ready.go:55 logs the warning below at 18:23:50, 18:23:52, and 18:23:54 ...]
	W1009 18:23:54.742411   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
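The block above is the readiness wait that dominates this log: the same GET against /api/v1/nodes/functional-753440 is issued twice a second, and every transport error is logged as "will retry". A minimal Go sketch of that pattern, assuming a client-go clientset (illustrative only, not minikube's actual node_ready.go; the helper name is hypothetical):

// Illustrative sketch of the poll-until-Ready loop visible above.
// Assumes a client-go clientset; waitNodeReady is a hypothetical name.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-issues the GET every 500 ms (the spacing of the
// timestamps above) and treats any transport error as retryable.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reported Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

A loop like this never gives up on connection refused by itself; only the caller's context deadline ends it, which is why the refusals repeat for the remainder of this log.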
	I1009 18:23:54.915717   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:54.969064   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969123   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969226   34792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:23:54.971206   34792 out.go:179] * Enabled addons: 
	I1009 18:23:54.972204   34792 addons.go:514] duration metric: took 1m55.845883827s for enable addons: enabled=[]
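The storageclass failure above has the same root cause as the readiness loop: kubectl cannot reach the apiserver on 8441, so validation cannot fetch the OpenAPI schema and the apply exits with status 1, after which the addon machinery gives up and reports enabled=[]. A minimal sketch of the retry-around-kubectl step the log describes, shelling out with os/exec (hypothetical helper and backoff, not minikube's actual addons.go):

// Illustrative sketch of retrying the "kubectl apply" from the log above.
// Paths are copied from the log; applyManifest is a hypothetical helper.
package addons

import (
	"fmt"
	"os/exec"
	"time"
)

func applyManifest(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		// sudo accepts VAR=value assignments before the command name.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// With the apiserver down, out carries the "failed to download
		// openapi ... connection refused" stderr seen above.
		lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
		time.Sleep(time.Duration(i+1) * time.Second) // simple linear backoff
	}
	return lastErr
}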
	I1009 18:23:55.241550   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.241625   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the polling continues unchanged every ~500 ms from 18:23:55 through 18:24:23, with node_ready.go:55 repeating the "will retry ... connection refused" warning roughly every two seconds (18:23:57, 18:23:59, 18:24:01, 18:24:03, 18:24:05, 18:24:08, 18:24:10, 18:24:12, 18:24:14, 18:24:16, 18:24:18, 18:24:21, 18:24:23) ...]
	W1009 18:24:23.741845   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
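Every one of these warnings wraps the same transport error, "connect: connection refused". A minimal sketch (illustrative, not minikube's code) of classifying that error as retryable; errors.Is unwraps the usual url.Error -> net.OpError -> syscall.Errno chain returned by client-go's HTTP transport:

// Illustrative sketch: detect the transient dial failures seen above.
package nodewait

import (
	"errors"
	"syscall"
)

// isRetryable reports whether err is the kind of transient connection
// error ("connection refused"/"connection reset") worth polling through.
func isRetryable(err error) bool {
	return errors.Is(err, syscall.ECONNREFUSED) ||
		errors.Is(err, syscall.ECONNRESET)
}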
	I1009 18:24:24.241359   34792 type.go:168] "Request Body" body=""
	I1009 18:24:24.241431   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:24.241755   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same request/empty-response pairs continue every ~500 ms from 18:24:24 through 18:24:48, with the "will retry ... connection refused" warning recurring at 18:24:25, 18:24:27, 18:24:30, 18:24:32, 18:24:34, 18:24:36, 18:24:39, 18:24:41, 18:24:43, 18:24:45, and 18:24:48 ...]
	W1009 18:24:48.241867   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:49.741848   34792 type.go:168] "Request Body" body=""
	I1009 18:24:49.741914   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:49.742284   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:50.242049   34792 type.go:168] "Request Body" body=""
	I1009 18:24:50.242115   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:50.242449   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:50.242500   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:50.741086   34792 type.go:168] "Request Body" body=""
	I1009 18:24:50.741198   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:50.741527   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:51.241098   34792 type.go:168] "Request Body" body=""
	I1009 18:24:51.241186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:51.241495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:51.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:24:51.741183   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:51.741522   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:52.241121   34792 type.go:168] "Request Body" body=""
	I1009 18:24:52.241212   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:52.241508   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:52.741094   34792 type.go:168] "Request Body" body=""
	I1009 18:24:52.741203   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:52.741514   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:52.741572   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:53.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:24:53.241183   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:53.241580   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:53.741218   34792 type.go:168] "Request Body" body=""
	I1009 18:24:53.741300   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:53.741630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:54.241270   34792 type.go:168] "Request Body" body=""
	I1009 18:24:54.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:54.241658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:54.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:24:54.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:54.741636   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:54.741687   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:55.241234   34792 type.go:168] "Request Body" body=""
	I1009 18:24:55.241306   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:55.241626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:55.741410   34792 type.go:168] "Request Body" body=""
	I1009 18:24:55.741479   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:55.741852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:56.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:24:56.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:56.241834   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:56.741423   34792 type.go:168] "Request Body" body=""
	I1009 18:24:56.741492   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:56.741854   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:56.741921   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:57.241419   34792 type.go:168] "Request Body" body=""
	I1009 18:24:57.241484   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:57.241784   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:57.741337   34792 type.go:168] "Request Body" body=""
	I1009 18:24:57.741402   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:57.741768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:58.241353   34792 type.go:168] "Request Body" body=""
	I1009 18:24:58.241420   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:58.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:58.741285   34792 type.go:168] "Request Body" body=""
	I1009 18:24:58.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:58.741698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:59.241536   34792 type.go:168] "Request Body" body=""
	I1009 18:24:59.241601   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:59.241906   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:59.241970   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:59.741466   34792 type.go:168] "Request Body" body=""
	I1009 18:24:59.741528   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:59.741866   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:00.241421   34792 type.go:168] "Request Body" body=""
	I1009 18:25:00.241487   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:00.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:00.741667   34792 type.go:168] "Request Body" body=""
	I1009 18:25:00.741748   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:00.742076   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:01.241775   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.241841   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.242226   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:01.242284   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:01.741879   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.741957   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.742330   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.241978   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.242041   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.242423   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.741029   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.741115   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.741462   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.241086   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.241501   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.741018   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.741114   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:03.741528   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:04.241053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.241116   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.241452   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:04.741007   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.741083   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.741445   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.241037   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.241427   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.741247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.741697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:05.741771   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:06.241254   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.241327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.241639   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:06.741286   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.741366   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.741735   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.741217   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:08.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.241647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:08.241711   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:08.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.741304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.741686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.241716   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.242124   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.741814   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.741880   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.742241   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:10.241918   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.241983   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.242339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:10.242405   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:10.741070   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.741194   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.741236   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.741322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.741656   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.241283   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.241345   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.241648   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.741341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:12.741727   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:13.241274   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.241660   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:13.741258   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.741346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.741679   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.241260   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.741277   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.741354   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:15.241247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.241309   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.241612   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:15.241669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:15.741488   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.741890   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.241468   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.241537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.741415   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.741480   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.741850   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:17.241442   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.241504   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:17.241861   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:17.741344   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.741411   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.741764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.241362   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.241432   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.241786   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.741325   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.741390   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.741723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:19.241633   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.241702   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.242011   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:19.242081   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:19.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.741733   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.742064   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.241763   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.241826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.242186   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.742053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.742131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.742513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.241071   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.241504   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.741088   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.741207   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:21.741594   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:22.241126   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:22.741131   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.741233   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.741588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.241242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.241568   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.741162   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.741242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.741577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:23.741627   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:24.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.241578   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:24.741188   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.741619   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.241275   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.741538   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.741597   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.741905   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:25.741979   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:26.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.241835   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:26.741401   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.741467   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.741780   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.241351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.241416   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.741308   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.741383   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:28.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.241331   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.241634   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:28.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:28.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.741315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.741626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.241574   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.241643   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.241986   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.741657   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.741719   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.742063   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:30.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.241804   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.242168   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:30.242230   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:30.741968   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.742470   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.241076   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.241532   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.741177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.741282   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.741624   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.241670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.741275   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.741360   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.741742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:32.741796   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:33.241329   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.241396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.241697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:33.741289   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.741384   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.741759   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.241368   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.241760   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.741351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.741428   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.741798   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:34.741864   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:35.241399   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.241838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:35.741772   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.741836   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.242003   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.242076   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.242435   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.741028   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.741097   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.741464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:37.241121   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.241212   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.241551   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:37.241620   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:37.741109   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.741219   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.741567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.241177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.741325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:39.241652   34792 type.go:168] "Request Body" body=""
	I1009 18:25:39.241726   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:39.242067   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:39.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same "Request Body" / GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 / empty "Response" cycle repeats every ~500 ms from 18:25:39 to 18:26:40, and node_ready.go:55 logs the identical warning (error getting node "functional-753440" condition "Ready" status, will retry: dial tcp 192.168.49.2:8441: connect: connection refused) roughly every 2.5 s throughout ...]
	I1009 18:26:40.741723   34792 type.go:168] "Request Body" body=""
	I1009 18:26:40.741789   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:40.742120   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:41.241751   34792 type.go:168] "Request Body" body=""
	I1009 18:26:41.241818   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:41.242203   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:41.242264   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:41.741856   34792 type.go:168] "Request Body" body=""
	I1009 18:26:41.741921   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:41.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:42.241895   34792 type.go:168] "Request Body" body=""
	I1009 18:26:42.241958   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:42.242315   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:42.741994   34792 type.go:168] "Request Body" body=""
	I1009 18:26:42.742065   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:42.742389   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:43.240973   34792 type.go:168] "Request Body" body=""
	I1009 18:26:43.241061   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:43.241393   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:43.740990   34792 type.go:168] "Request Body" body=""
	I1009 18:26:43.741062   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:43.741419   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:43.741468   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:44.241000   34792 type.go:168] "Request Body" body=""
	I1009 18:26:44.241064   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:44.241416   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:44.740980   34792 type.go:168] "Request Body" body=""
	I1009 18:26:44.741068   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:44.741391   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:45.241003   34792 type.go:168] "Request Body" body=""
	I1009 18:26:45.241071   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:45.241415   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:45.741236   34792 type.go:168] "Request Body" body=""
	I1009 18:26:45.741300   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:45.741605   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:45.741660   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:46.241187   34792 type.go:168] "Request Body" body=""
	I1009 18:26:46.241257   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:46.241559   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:46.741123   34792 type.go:168] "Request Body" body=""
	I1009 18:26:46.741200   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:46.741513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:47.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:26:47.241182   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:47.241488   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:47.741079   34792 type.go:168] "Request Body" body=""
	I1009 18:26:47.741166   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:47.741472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:48.241093   34792 type.go:168] "Request Body" body=""
	I1009 18:26:48.241186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:48.241592   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:48.241645   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:48.741196   34792 type.go:168] "Request Body" body=""
	I1009 18:26:48.741263   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:48.741567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:49.241340   34792 type.go:168] "Request Body" body=""
	I1009 18:26:49.241413   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:49.241715   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:49.741320   34792 type.go:168] "Request Body" body=""
	I1009 18:26:49.741390   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:49.741693   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:50.241274   34792 type.go:168] "Request Body" body=""
	I1009 18:26:50.241356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:50.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:50.241739   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:50.741604   34792 type.go:168] "Request Body" body=""
	I1009 18:26:50.741672   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:50.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:51.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:26:51.241697   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:51.242059   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:51.741717   34792 type.go:168] "Request Body" body=""
	I1009 18:26:51.741781   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:51.742121   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:52.241772   34792 type.go:168] "Request Body" body=""
	I1009 18:26:52.241840   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:52.242193   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:52.242249   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:52.741892   34792 type.go:168] "Request Body" body=""
	I1009 18:26:52.741970   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:52.742329   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:53.241997   34792 type.go:168] "Request Body" body=""
	I1009 18:26:53.242075   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:53.242417   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:53.741024   34792 type.go:168] "Request Body" body=""
	I1009 18:26:53.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:53.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:54.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:26:54.241125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:54.241492   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:54.741067   34792 type.go:168] "Request Body" body=""
	I1009 18:26:54.741161   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:54.741529   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:54.741583   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:55.241129   34792 type.go:168] "Request Body" body=""
	I1009 18:26:55.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:55.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:55.741431   34792 type.go:168] "Request Body" body=""
	I1009 18:26:55.741496   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:55.741812   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:56.241424   34792 type.go:168] "Request Body" body=""
	I1009 18:26:56.241490   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:56.241796   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:56.741393   34792 type.go:168] "Request Body" body=""
	I1009 18:26:56.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:56.741773   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:56.741826   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:57.241378   34792 type.go:168] "Request Body" body=""
	I1009 18:26:57.241453   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:57.241771   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:57.741379   34792 type.go:168] "Request Body" body=""
	I1009 18:26:57.741447   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:57.741762   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:58.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:26:58.241413   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:58.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:58.741322   34792 type.go:168] "Request Body" body=""
	I1009 18:26:58.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:58.741713   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:59.241600   34792 type.go:168] "Request Body" body=""
	I1009 18:26:59.241669   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:59.241990   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:59.242043   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:59.741668   34792 type.go:168] "Request Body" body=""
	I1009 18:26:59.741732   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:59.742052   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:00.241717   34792 type.go:168] "Request Body" body=""
	I1009 18:27:00.241783   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:00.242095   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:00.741931   34792 type.go:168] "Request Body" body=""
	I1009 18:27:00.742008   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:00.742337   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:01.242007   34792 type.go:168] "Request Body" body=""
	I1009 18:27:01.242099   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:01.242479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:01.242534   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:01.741056   34792 type.go:168] "Request Body" body=""
	I1009 18:27:01.741158   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:01.741495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:02.241218   34792 type.go:168] "Request Body" body=""
	I1009 18:27:02.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:02.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:02.741259   34792 type.go:168] "Request Body" body=""
	I1009 18:27:02.741340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:02.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:03.241295   34792 type.go:168] "Request Body" body=""
	I1009 18:27:03.241359   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:03.241698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:03.741242   34792 type.go:168] "Request Body" body=""
	I1009 18:27:03.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:03.741628   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:03.741679   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:04.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:04.241270   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:04.241627   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:04.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:04.741287   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:04.741583   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:05.241255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:05.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:05.241742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:05.741635   34792 type.go:168] "Request Body" body=""
	I1009 18:27:05.741703   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:05.742066   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:05.742130   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:06.241658   34792 type.go:168] "Request Body" body=""
	I1009 18:27:06.241731   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:06.242079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:06.741854   34792 type.go:168] "Request Body" body=""
	I1009 18:27:06.741922   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:06.742243   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:07.241927   34792 type.go:168] "Request Body" body=""
	I1009 18:27:07.241997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:07.242459   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:07.741045   34792 type.go:168] "Request Body" body=""
	I1009 18:27:07.741126   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:07.741466   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:08.241033   34792 type.go:168] "Request Body" body=""
	I1009 18:27:08.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:08.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:08.241511   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:08.741034   34792 type.go:168] "Request Body" body=""
	I1009 18:27:08.741096   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:08.741406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:09.241378   34792 type.go:168] "Request Body" body=""
	I1009 18:27:09.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:09.241764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:09.741349   34792 type.go:168] "Request Body" body=""
	I1009 18:27:09.741417   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:09.741711   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:10.241285   34792 type.go:168] "Request Body" body=""
	I1009 18:27:10.241365   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:10.241692   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:10.241753   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:10.741690   34792 type.go:168] "Request Body" body=""
	I1009 18:27:10.741757   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:10.742128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:11.241848   34792 type.go:168] "Request Body" body=""
	I1009 18:27:11.241913   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:11.242250   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:11.741958   34792 type.go:168] "Request Body" body=""
	I1009 18:27:11.742022   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:11.742364   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:12.240970   34792 type.go:168] "Request Body" body=""
	I1009 18:27:12.241079   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:12.241437   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:12.741083   34792 type.go:168] "Request Body" body=""
	I1009 18:27:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:12.741518   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:12.741570   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:13.241130   34792 type.go:168] "Request Body" body=""
	I1009 18:27:13.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:13.241579   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:13.741161   34792 type.go:168] "Request Body" body=""
	I1009 18:27:13.741231   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:13.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:14.241185   34792 type.go:168] "Request Body" body=""
	I1009 18:27:14.241247   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:14.241557   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:14.741128   34792 type.go:168] "Request Body" body=""
	I1009 18:27:14.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:14.741560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:14.741616   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:15.241160   34792 type.go:168] "Request Body" body=""
	I1009 18:27:15.241231   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:15.241537   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:15.741362   34792 type.go:168] "Request Body" body=""
	I1009 18:27:15.741426   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:15.741731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:16.241332   34792 type.go:168] "Request Body" body=""
	I1009 18:27:16.241395   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:16.241711   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:16.741290   34792 type.go:168] "Request Body" body=""
	I1009 18:27:16.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:16.741691   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:16.741746   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:17.241296   34792 type.go:168] "Request Body" body=""
	I1009 18:27:17.241365   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:17.241677   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:17.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:27:17.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:17.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:18.241233   34792 type.go:168] "Request Body" body=""
	I1009 18:27:18.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:18.241649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:18.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:27:18.741327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:18.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:19.241576   34792 type.go:168] "Request Body" body=""
	I1009 18:27:19.241642   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:19.241965   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:19.242017   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:19.741671   34792 type.go:168] "Request Body" body=""
	I1009 18:27:19.741744   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:19.742057   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:20.241721   34792 type.go:168] "Request Body" body=""
	I1009 18:27:20.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:20.242076   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:20.742009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:20.742090   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:20.742453   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:21.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:27:21.241122   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:21.241467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:21.741089   34792 type.go:168] "Request Body" body=""
	I1009 18:27:21.741181   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:21.741490   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:21.741542   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:22.241108   34792 type.go:168] "Request Body" body=""
	I1009 18:27:22.241209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:22.241541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:22.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:27:22.741302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:22.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:23.241319   34792 type.go:168] "Request Body" body=""
	I1009 18:27:23.241387   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:23.241701   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:23.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:27:23.741296   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:23.741605   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:23.741658   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:24.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:27:24.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:24.241598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:24.741228   34792 type.go:168] "Request Body" body=""
	I1009 18:27:24.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:24.741613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:25.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:27:25.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:25.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:25.741545   34792 type.go:168] "Request Body" body=""
	I1009 18:27:25.741614   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:25.741927   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:25.742024   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:26.241505   34792 type.go:168] "Request Body" body=""
	I1009 18:27:26.241567   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:26.241878   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:26.741454   34792 type.go:168] "Request Body" body=""
	I1009 18:27:26.741518   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:26.741875   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:27.241441   34792 type.go:168] "Request Body" body=""
	I1009 18:27:27.241506   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:27.241818   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:27.741400   34792 type.go:168] "Request Body" body=""
	I1009 18:27:27.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:27.741797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:28.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:27:28.241474   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:28.241808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:28.241862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:28.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:28.741472   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:28.741806   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:29.241748   34792 type.go:168] "Request Body" body=""
	I1009 18:27:29.241819   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:29.242161   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:29.741821   34792 type.go:168] "Request Body" body=""
	I1009 18:27:29.741885   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:29.742231   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:30.241904   34792 type.go:168] "Request Body" body=""
	I1009 18:27:30.241974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:30.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:30.242382   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:30.741035   34792 type.go:168] "Request Body" body=""
	I1009 18:27:30.741108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:30.741409   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:31.241068   34792 type.go:168] "Request Body" body=""
	I1009 18:27:31.241132   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:31.241479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:31.741086   34792 type.go:168] "Request Body" body=""
	I1009 18:27:31.741176   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:31.741471   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:32.241219   34792 type.go:168] "Request Body" body=""
	I1009 18:27:32.241295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:32.241610   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:32.741219   34792 type.go:168] "Request Body" body=""
	I1009 18:27:32.741298   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:32.741606   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:32.741661   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:33.241210   34792 type.go:168] "Request Body" body=""
	I1009 18:27:33.241276   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:33.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:33.741182   34792 type.go:168] "Request Body" body=""
	I1009 18:27:33.741248   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:33.741547   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:34.241192   34792 type.go:168] "Request Body" body=""
	I1009 18:27:34.241262   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:34.241590   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:34.741212   34792 type.go:168] "Request Body" body=""
	I1009 18:27:34.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:34.741609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:35.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:27:35.241323   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:35.241649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:35.241703   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:35.741567   34792 type.go:168] "Request Body" body=""
	I1009 18:27:35.741632   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:35.741973   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:36.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.241728   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.242025   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:36.741778   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.741844   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.742212   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:37.241852   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.241925   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.242276   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:37.242330   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:37.741978   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.742052   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.742377   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.240952   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.241027   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.241428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.741115   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.741222   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.741569   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.241531   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.241853   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.741475   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.741888   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:39.741940   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:40.241482   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.241546   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.241865   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:40.741822   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.741912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.742310   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.241924   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.241992   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.242352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.742037   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.742123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.742467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:41.742533   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:42.241062   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.241131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.241483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:42.741199   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.741261   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.741576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.241209   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.241285   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.241620   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.741257   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.741675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:44.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.241630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:44.241684   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:44.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.741621   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.241009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.241089   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.241464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.741255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:46.241261   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.241687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:46.241736   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:46.741271   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.741338   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.241666   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.741310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.741653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.241251   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.241651   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.741328   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.741647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:48.741699   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:49.241692   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.241772   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.242116   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:49.741779   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.741846   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.242357   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.741207   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:51.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.241313   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:51.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:51.741256   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.741385   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.741740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.241321   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.241392   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.241724   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.741315   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.741382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.741729   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:53.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.241398   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:53.241797   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:53.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.741465   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.741821   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.241418   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.241482   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.241803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.741399   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.741794   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:55.241395   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:55.241851   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:55.741689   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.741763   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.742091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.241733   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.241801   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.242128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.741823   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.741896   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.742277   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:57.241950   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.242025   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:57.242451   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:57.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.741454   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.241127   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.241225   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.741208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.741281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:59.241113   34792 node_ready.go:38] duration metric: took 6m0.000256287s for node "functional-753440" to be "Ready" ...
	I1009 18:27:59.244464   34792 out.go:203] 
	W1009 18:27:59.246567   34792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 18:27:59.246590   34792 out.go:285] * 
	W1009 18:27:59.248293   34792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:27:59.250105   34792 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.569212562Z" level=info msg="createCtr: removing container 44cc920dbd6720b1f12608fd0a870e869fd6904251296b8ad12e2b688c1490f2" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.569243649Z" level=info msg="createCtr: deleting container 44cc920dbd6720b1f12608fd0a870e869fd6904251296b8ad12e2b688c1490f2 from storage" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:27:58 functional-753440 crio[2938]: time="2025-10-09T18:27:58.571368081Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=0680e5b7-4641-42be-bfb6-dfa9e93a4d4b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.542891227Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=1f1258bf-421d-4688-b323-1fa5c359ad07 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.543963575Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=d89f14c8-0567-4ffc-93dc-1010587b7efb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.545202698Z" level=info msg="Creating container: kube-system/etcd-functional-753440/etcd" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.545739676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.551198324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.5516209Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.573070804Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.574711439Z" level=info msg="createCtr: deleting container ID 31d7052f51448ab4cb31450be8c20e284409f85b31edc43d374b6e4c387c6694 from idIndex" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.574748009Z" level=info msg="createCtr: removing container 31d7052f51448ab4cb31450be8c20e284409f85b31edc43d374b6e4c387c6694" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.574780064Z" level=info msg="createCtr: deleting container 31d7052f51448ab4cb31450be8c20e284409f85b31edc43d374b6e4c387c6694 from storage" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:00 functional-753440 crio[2938]: time="2025-10-09T18:28:00.576778871Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=2a6ae148-b613-4860-bdc1-e184df617eb6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.543698033Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a800cd49-f7f1-4447-98c2-e09f52d404b3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.544707Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=2fe0d86f-ff52-4d98-951c-fdb64ef3d6af name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.545655472Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753440/kube-scheduler" id=8001c97d-7d9e-4a4e-98c3-5daa84c61e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.545917788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.549542455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.550164181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.569177075Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8001c97d-7d9e-4a4e-98c3-5daa84c61e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.571524685Z" level=info msg="createCtr: deleting container ID 17235682c69dd35258724bb5e5642cb1ba20aba2591b34e185cd460dcf086ed3 from idIndex" id=8001c97d-7d9e-4a4e-98c3-5daa84c61e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.571580833Z" level=info msg="createCtr: removing container 17235682c69dd35258724bb5e5642cb1ba20aba2591b34e185cd460dcf086ed3" id=8001c97d-7d9e-4a4e-98c3-5daa84c61e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.571625977Z" level=info msg="createCtr: deleting container 17235682c69dd35258724bb5e5642cb1ba20aba2591b34e185cd460dcf086ed3 from storage" id=8001c97d-7d9e-4a4e-98c3-5daa84c61e69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:01 functional-753440 crio[2938]: time="2025-10-09T18:28:01.576080032Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=8001c97d-7d9e-4a4e-98c3-5daa84c61e69 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:28:03.272567    4482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:03.273093    4482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:03.274704    4482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:03.275202    4482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:03.276749    4482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:28:03 up  1:10,  0 user,  load average: 0.00, 0.07, 0.09
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:27:58 functional-753440 kubelet[1796]:  > podSandboxID="a0f669ac9226ee4ac7b841aacfe05ece4235d10b02fe7bb351eab32cadb9e24d"
	Oct 09 18:27:58 functional-753440 kubelet[1796]: E1009 18:27:58.571796    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:27:58 functional-753440 kubelet[1796]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:27:58 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:27:58 functional-753440 kubelet[1796]: E1009 18:27:58.571834    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.542411    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.577097    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:00 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:00 functional-753440 kubelet[1796]:  > podSandboxID="b2bb9a720dde4343bb6d68e21981701423cf9ba8fc536a4b16c3a5d7282c9e5b"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.577210    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:00 functional-753440 kubelet[1796]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:00 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:00 functional-753440 kubelet[1796]: E1009 18:28:00.577254    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.227395    1796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.367188    1796 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: I1009 18:28:01.426685    1796 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.427087    1796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.543161    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.576490    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:01 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:01 functional-753440 kubelet[1796]:  > podSandboxID="a1601c351acb2109bc843118525e18f9874347bc3c77d062c9da98c9f01ca0c9"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.576623    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:01 functional-753440 kubelet[1796]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:01 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.576670    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	

                                                
                                                
-- /stdout --
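The six minutes of polling in the log above is a plain retry loop: one GET against the node object every 500ms, each attempt refused, until the wait deadline fires at 18:27:59. Below is a minimal sketch of that pattern, not minikube's actual implementation: the URL is taken from the log, while the bare HTTP client with InsecureSkipVerify is an illustrative stand-in for minikube's authenticated client-go client.

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForAPIServer retries a GET every 500ms until the server answers
	// or ctx expires, mirroring the round_trippers entries above.
	func waitForAPIServer(ctx context.Context, url string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// surfaces as "context deadline exceeded", as in the failure above
				return fmt.Errorf("waiting for node: %w", ctx.Err())
			case <-ticker.C:
				resp, err := client.Get(url)
				if err != nil {
					continue // e.g. "connect: connection refused" while the apiserver is down
				}
				resp.Body.Close()
				return nil // server answered; the real code then checks the Ready condition
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := waitForAPIServer(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-753440")
		fmt.Println(err)
	}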
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (318.14924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.19s)
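Note the split result above: the host probe reports Running while the apiserver probe reports Stopped. That distinction can be reproduced by hand with a raw TCP dial against the apiserver endpoint; a quick sketch, where only the address is taken from the logs:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.49.2:8441 is the apiserver endpoint the tests poll above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not accepting connections:", err) // "connection refused" here
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}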

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (2.19s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 kubectl -- --context functional-753440 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 kubectl -- --context functional-753440 get pods: exit status 1 (96.699431ms)

                                                
                                                
** stderr ** 
	E1009 18:28:09.806737   40211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:09.807066   40211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:09.808524   40211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:09.808811   40211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:09.810161   40211 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-753440 kubectl -- --context functional-753440 get pods": exit status 1
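For reference, `minikube kubectl --` forwards everything after the `--` to a kubectl binary matching the cluster version, so the failure above is kubectl itself being refused by 192.168.49.2:8441. The harness assertion reduces to running the binary and inspecting the exit status; a stripped-down sketch, with the binary path and arguments from the log and the rest illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-753440",
			"kubectl", "--", "--context", "functional-753440", "get", "pods")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// exit status 1 while the apiserver refuses connections, as above
			fmt.Printf("non-zero exit: %v\n%s", err, out)
		}
	}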
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
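
The inspect dump above is more than diagnostics: the published ports and network block are exactly what the harness extracts later in this log via docker's Go-template `--format` flag. As a minimal illustration (not part of the captured run), the same fields can be read directly while the container exists:

	# Host port mapped to the node's SSH port (22/tcp); 32778 in this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-753440
	# Container IP on the per-profile network; 192.168.49.2 in this run
	docker container inspect -f '{{(index .NetworkSettings.Networks "functional-753440").IPAddress}}' functional-753440
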
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (307.657199ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
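
For context, `minikube status` reports component health through its exit code as well as the printed fields, which is why the harness treats a non-zero exit as potentially benign. A quick way to see both signals at once (illustrative; the exact bit meanings are minikube-internal):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p functional-753440; echo "exit: $?"
	# Running
	# exit: 2    <- host is Running, but the exit code reports other components unhealthy
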
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                              │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                              │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ -p functional-753440 --alsologtostderr -v=8                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:21 UTC │                     │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.1                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.3                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:latest                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add minikube-local-cache-test:functional-753440                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache delete minikube-local-cache-test:functional-753440                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl images                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ cache   │ functional-753440 cache reload                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ kubectl │ functional-753440 kubectl -- --context functional-753440 get pods                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
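
The last audit row is the invocation under test: minikube's kubectl pass-through, where everything after `--` is forwarded to kubectl verbatim, and the empty END TIME marks it as the command that never completed. Spelled out as a standalone command line (illustrative, reconstructed from the row above):

	out/minikube-linux-amd64 -p functional-753440 kubectl -- --context functional-753440 get pods
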
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:21:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:21:55.407242   34792 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:21:55.407482   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407490   34792 out.go:374] Setting ErrFile to fd 2...
	I1009 18:21:55.407494   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407669   34792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:21:55.408109   34792 out.go:368] Setting JSON to false
	I1009 18:21:55.408948   34792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3863,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:21:55.409029   34792 start.go:141] virtualization: kvm guest
	I1009 18:21:55.411208   34792 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:21:55.412706   34792 notify.go:220] Checking for updates...
	I1009 18:21:55.412728   34792 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:21:55.414107   34792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:21:55.415609   34792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:55.417005   34792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:21:55.418411   34792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:21:55.419884   34792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:21:55.421538   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:55.421658   34792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:21:55.445068   34792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:21:55.445204   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.504624   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.494450296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.504746   34792 docker.go:318] overlay module found
	I1009 18:21:55.507261   34792 out.go:179] * Using the docker driver based on existing profile
	I1009 18:21:55.508504   34792 start.go:305] selected driver: docker
	I1009 18:21:55.508518   34792 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.508594   34792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:21:55.508665   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.566793   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.557358643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.567631   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:55.567714   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:55.567780   34792 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.569913   34792 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:21:55.571250   34792 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:21:55.572672   34792 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:21:55.573890   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:55.573921   34792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:21:55.573933   34792 cache.go:64] Caching tarball of preloaded images
	I1009 18:21:55.573992   34792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:21:55.574016   34792 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:21:55.574025   34792 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:21:55.574109   34792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:21:55.593603   34792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:21:55.593631   34792 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:21:55.593646   34792 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:21:55.593672   34792 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:21:55.593732   34792 start.go:364] duration metric: took 38.489µs to acquireMachinesLock for "functional-753440"
	I1009 18:21:55.593749   34792 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:21:55.593758   34792 fix.go:54] fixHost starting: 
	I1009 18:21:55.593970   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:55.610925   34792 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:21:55.610951   34792 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:21:55.612681   34792 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:21:55.612704   34792 machine.go:93] provisionDockerMachine start ...
	I1009 18:21:55.612764   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.630174   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.630389   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.630401   34792 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:21:55.773949   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.773975   34792 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:21:55.774031   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.792726   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.792949   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.792962   34792 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:21:55.945969   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.946040   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.963600   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.963821   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.963839   34792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:21:56.108677   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:21:56.108700   34792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:21:56.108717   34792 ubuntu.go:190] setting up certificates
	I1009 18:21:56.108727   34792 provision.go:84] configureAuth start
	I1009 18:21:56.108783   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:56.127107   34792 provision.go:143] copyHostCerts
	I1009 18:21:56.127166   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127197   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:21:56.127212   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127290   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:21:56.127394   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127416   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:21:56.127420   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127449   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:21:56.127507   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127523   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:21:56.127526   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127549   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:21:56.127598   34792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:21:56.380428   34792 provision.go:177] copyRemoteCerts
	I1009 18:21:56.380482   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:21:56.380515   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.398054   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.500395   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:21:56.500448   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:21:56.517603   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:21:56.517655   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:21:56.534349   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:21:56.534397   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:21:56.551305   34792 provision.go:87] duration metric: took 442.551304ms to configureAuth
	I1009 18:21:56.551330   34792 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:21:56.551498   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:56.551579   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.568651   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:56.568866   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:56.568881   34792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:21:56.838390   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:21:56.838414   34792 machine.go:96] duration metric: took 1.225703269s to provisionDockerMachine
	I1009 18:21:56.838426   34792 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:21:56.838437   34792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:21:56.838510   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:21:56.838559   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.856450   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.959658   34792 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:21:56.963119   34792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 18:21:56.963150   34792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 18:21:56.963158   34792 command_runner.go:130] > VERSION_ID="12"
	I1009 18:21:56.963165   34792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 18:21:56.963174   34792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 18:21:56.963179   34792 command_runner.go:130] > ID=debian
	I1009 18:21:56.963186   34792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 18:21:56.963194   34792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 18:21:56.963212   34792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 18:21:56.963315   34792 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:21:56.963334   34792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:21:56.963342   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:21:56.963382   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:21:56.963448   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:21:56.963463   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:21:56.963529   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:21:56.963535   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> /etc/test/nested/copy/14880/hosts
	I1009 18:21:56.963565   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:21:56.970888   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:56.988730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:21:57.005907   34792 start.go:296] duration metric: took 167.469505ms for postStartSetup
	I1009 18:21:57.005971   34792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:21:57.006025   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.023806   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.123166   34792 command_runner.go:130] > 39%
	I1009 18:21:57.123235   34792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:21:57.127917   34792 command_runner.go:130] > 179G
	I1009 18:21:57.127948   34792 fix.go:56] duration metric: took 1.534189396s for fixHost
	I1009 18:21:57.127960   34792 start.go:83] releasing machines lock for "functional-753440", held for 1.534218366s
	I1009 18:21:57.128034   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:57.145978   34792 ssh_runner.go:195] Run: cat /version.json
	I1009 18:21:57.146019   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.146063   34792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:21:57.146159   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.164302   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.164547   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.263542   34792 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
	I1009 18:21:57.263690   34792 ssh_runner.go:195] Run: systemctl --version
	I1009 18:21:57.316955   34792 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 18:21:57.317002   34792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 18:21:57.317022   34792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 18:21:57.317074   34792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:21:57.353021   34792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:21:57.357737   34792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 18:21:57.357788   34792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:21:57.357834   34792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:21:57.365811   34792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:21:57.365833   34792 start.go:495] detecting cgroup driver to use...
	I1009 18:21:57.365861   34792 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:21:57.365903   34792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:21:57.380237   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:21:57.392796   34792 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:21:57.392859   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:21:57.407315   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:21:57.419892   34792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:21:57.506572   34792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:21:57.589596   34792 docker.go:234] disabling docker service ...
	I1009 18:21:57.589673   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:21:57.603725   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:21:57.615780   34792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:21:57.696218   34792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:21:57.781915   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:21:57.794534   34792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:21:57.808497   34792 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 18:21:57.808534   34792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:21:57.808589   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.817764   34792 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:21:57.817814   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.827115   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.836066   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.844563   34792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:21:57.852458   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.861227   34792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.869900   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.878917   34792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:21:57.886570   34792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 18:21:57.886644   34792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:21:57.894517   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:57.979064   34792 ssh_runner.go:195] Run: sudo systemctl restart crio
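
The sed pipeline above edits CRI-O's drop-in config in place before the restart. A rough reconstruction of the resulting /etc/crio/crio.conf.d/02-crio.conf follows; the section headers come from CRI-O's standard TOML schema and are an assumption, while the key/value edits are taken from the commands above:

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# [crio.image]
	# pause_image = "registry.k8s.io/pause:3.10.1"
	#
	# [crio.runtime]
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]
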
	I1009 18:21:58.090717   34792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:21:58.090783   34792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:21:58.095044   34792 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 18:21:58.095068   34792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 18:21:58.095074   34792 command_runner.go:130] > Device: 0,59	Inode: 3803        Links: 1
	I1009 18:21:58.095080   34792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.095085   34792 command_runner.go:130] > Access: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095093   34792 command_runner.go:130] > Modify: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095101   34792 command_runner.go:130] > Change: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095108   34792 command_runner.go:130] >  Birth: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095130   34792 start.go:563] Will wait 60s for crictl version
	I1009 18:21:58.095214   34792 ssh_runner.go:195] Run: which crictl
	I1009 18:21:58.099101   34792 command_runner.go:130] > /usr/local/bin/crictl
	I1009 18:21:58.099187   34792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:21:58.122816   34792 command_runner.go:130] > Version:  0.1.0
	I1009 18:21:58.122840   34792 command_runner.go:130] > RuntimeName:  cri-o
	I1009 18:21:58.122845   34792 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 18:21:58.122850   34792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 18:21:58.122867   34792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
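
These runtime probes run over SSH inside the node container. Using the same `ssh` pass-through form recorded in the audit table earlier, the equivalent manual checks would be (illustrative, not captured output):

	out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl version
	out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl images --output json
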
	I1009 18:21:58.122920   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.149899   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.149922   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.149928   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.149933   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.149944   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.149949   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.149952   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.149957   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.149961   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.149964   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.149967   34792 command_runner.go:130] >      static
	I1009 18:21:58.149971   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.149975   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.149978   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.149982   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.149988   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.149991   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.149998   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.150002   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.150007   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.151351   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.178662   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.178683   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.178689   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.178693   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.178698   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.178702   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.178706   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.178714   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.178718   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.178721   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.178724   34792 command_runner.go:130] >      static
	I1009 18:21:58.178728   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.178732   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.178735   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.178739   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.178742   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.178757   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.178764   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.178768   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.178771   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.181232   34792 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:21:58.182844   34792 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:21:58.200852   34792 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:21:58.205024   34792 command_runner.go:130] > 192.168.49.1	host.minikube.internal
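
The grep confirms the guest's /etc/hosts maps host.minikube.internal to 192.168.49.1, the network Gateway shown in the inspect dump earlier. The same check from outside the harness (illustrative):

	out/minikube-linux-amd64 -p functional-753440 ssh grep host.minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal
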
	I1009 18:21:58.205096   34792 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:21:58.205232   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:58.205276   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.234303   34792 command_runner.go:130] > {
	I1009 18:21:58.234338   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.234345   34792 command_runner.go:130] >     {
	I1009 18:21:58.234355   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.234362   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234369   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.234373   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234378   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234388   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.234400   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.234409   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234417   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.234426   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234435   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234443   34792 command_runner.go:130] >     },
	I1009 18:21:58.234449   34792 command_runner.go:130] >     {
	I1009 18:21:58.234460   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.234468   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234478   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.234486   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234494   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234509   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.234523   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.234532   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234539   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.234548   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234565   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234581   34792 command_runner.go:130] >     },
	I1009 18:21:58.234590   34792 command_runner.go:130] >     {
	I1009 18:21:58.234600   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.234610   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234619   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.234627   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234635   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234649   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.234665   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.234673   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234680   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.234689   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.234697   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234713   34792 command_runner.go:130] >     },
	I1009 18:21:58.234721   34792 command_runner.go:130] >     {
	I1009 18:21:58.234731   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.234740   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234749   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.234757   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234765   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234780   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.234794   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.234802   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234809   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.234817   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234824   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234833   34792 command_runner.go:130] >       },
	I1009 18:21:58.234849   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234858   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234864   34792 command_runner.go:130] >     },
	I1009 18:21:58.234871   34792 command_runner.go:130] >     {
	I1009 18:21:58.234882   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.234891   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234906   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.234914   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234921   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234936   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.234952   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.234960   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234967   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.234976   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234984   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234991   34792 command_runner.go:130] >       },
	I1009 18:21:58.234999   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235008   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235015   34792 command_runner.go:130] >     },
	I1009 18:21:58.235023   34792 command_runner.go:130] >     {
	I1009 18:21:58.235033   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.235042   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235052   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.235059   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235065   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235078   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.235098   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.235106   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235113   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.235122   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235130   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235152   34792 command_runner.go:130] >       },
	I1009 18:21:58.235159   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235168   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235174   34792 command_runner.go:130] >     },
	I1009 18:21:58.235183   34792 command_runner.go:130] >     {
	I1009 18:21:58.235193   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.235202   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235211   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.235227   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235236   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235248   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.235262   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.235271   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235278   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.235286   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235294   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235302   34792 command_runner.go:130] >     },
	I1009 18:21:58.235314   34792 command_runner.go:130] >     {
	I1009 18:21:58.235326   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.235333   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235344   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.235352   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235359   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235373   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.235408   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.235416   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235424   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.235433   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235441   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235450   34792 command_runner.go:130] >       },
	I1009 18:21:58.235456   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235464   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235470   34792 command_runner.go:130] >     },
	I1009 18:21:58.235477   34792 command_runner.go:130] >     {
	I1009 18:21:58.235488   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.235496   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235508   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.235515   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235522   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235536   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.235550   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.235566   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235576   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.235582   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235592   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.235599   34792 command_runner.go:130] >       },
	I1009 18:21:58.235606   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235615   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.235621   34792 command_runner.go:130] >     }
	I1009 18:21:58.235627   34792 command_runner.go:130] >   ]
	I1009 18:21:58.235633   34792 command_runner.go:130] > }
	I1009 18:21:58.236008   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.236027   34792 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:21:58.236090   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.260405   34792 command_runner.go:130] > {
	I1009 18:21:58.260434   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.260440   34792 command_runner.go:130] >     {
	I1009 18:21:58.260454   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.260464   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260473   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.260483   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260490   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260505   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.260520   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.260529   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260540   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.260550   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260560   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260566   34792 command_runner.go:130] >     },
	I1009 18:21:58.260575   34792 command_runner.go:130] >     {
	I1009 18:21:58.260586   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.260593   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260606   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.260615   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260624   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260639   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.260653   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.260661   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260667   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.260674   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260681   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260689   34792 command_runner.go:130] >     },
	I1009 18:21:58.260698   34792 command_runner.go:130] >     {
	I1009 18:21:58.260711   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.260721   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260732   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.260740   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260746   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260759   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.260769   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.260777   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260785   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.260794   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.260804   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260812   34792 command_runner.go:130] >     },
	I1009 18:21:58.260817   34792 command_runner.go:130] >     {
	I1009 18:21:58.260829   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.260838   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260848   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.260854   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260861   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260876   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.260890   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.260897   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260904   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.260914   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.260923   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.260931   34792 command_runner.go:130] >       },
	I1009 18:21:58.260939   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260949   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260957   34792 command_runner.go:130] >     },
	I1009 18:21:58.260965   34792 command_runner.go:130] >     {
	I1009 18:21:58.260974   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.260984   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260992   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.261000   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261007   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261018   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.261032   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.261040   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261047   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.261056   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261066   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261073   34792 command_runner.go:130] >       },
	I1009 18:21:58.261083   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261093   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261101   34792 command_runner.go:130] >     },
	I1009 18:21:58.261107   34792 command_runner.go:130] >     {
	I1009 18:21:58.261119   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.261128   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261153   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.261159   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261169   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261181   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.261196   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.261205   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261214   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.261223   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261234   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261243   34792 command_runner.go:130] >       },
	I1009 18:21:58.261249   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261258   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261266   34792 command_runner.go:130] >     },
	I1009 18:21:58.261270   34792 command_runner.go:130] >     {
	I1009 18:21:58.261283   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.261295   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261306   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.261314   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261321   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261334   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.261349   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.261356   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261364   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.261372   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261379   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261384   34792 command_runner.go:130] >     },
	I1009 18:21:58.261393   34792 command_runner.go:130] >     {
	I1009 18:21:58.261402   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.261409   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261417   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.261422   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261428   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261439   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.261460   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.261467   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261473   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.261482   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261491   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261498   34792 command_runner.go:130] >       },
	I1009 18:21:58.261507   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261516   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261525   34792 command_runner.go:130] >     },
	I1009 18:21:58.261533   34792 command_runner.go:130] >     {
	I1009 18:21:58.261543   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.261549   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261555   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.261563   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261570   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261584   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.261597   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.261607   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261614   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.261620   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261626   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.261632   34792 command_runner.go:130] >       },
	I1009 18:21:58.261636   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261641   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.261649   34792 command_runner.go:130] >     }
	I1009 18:21:58.261655   34792 command_runner.go:130] >   ]
	I1009 18:21:58.261663   34792 command_runner.go:130] > }
	I1009 18:21:58.262011   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.262027   34792 cache_images.go:85] Images are preloaded, skipping loading
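For context on the two checks above (crio.go:514 and cache_images.go:85): minikube appears to decide that images are "preloaded" by listing images over SSH with `sudo crictl images --output json` and comparing the result against the image list it expects for the Kubernetes version. A minimal sketch of that comparison, assuming only the JSON shape visible in the log (`images[].repoTags`); the type and function names are illustrative, not minikube's actual code:

// Sketch (not minikube's implementation) of the check behind the
// "all images are preloaded" log line: parse the JSON printed by
// `sudo crictl images --output json` and verify every required tag exists.
package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the JSON shape visible in the log above:
// {"images": [{"repoTags": [...]}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func allPreloaded(crictlJSON []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(crictlJSON, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10.1"]}]}`)
	ok, err := allPreloaded(sample, []string{"registry.k8s.io/pause:3.10.1"})
	fmt.Println(ok, err) // true <nil>
}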
	I1009 18:21:58.262034   34792 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:21:58.262124   34792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
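The block above (kubeadm.go:946) is the systemd drop-in minikube generates for the kubelet, with the node's hostname and IP from the config substituted into the ExecStart line. A sketch of how such a unit could be rendered from those parameters; the type, template, and flag subset are assumptions for illustration, not minikube's actual generator:

// Illustrative only: render a kubelet systemd drop-in like the one logged
// above from a few node parameters, using text/template.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubernetesVersion string // e.g. "v1.34.1"
	Hostname          string // e.g. "functional-753440"
	NodeIP            string // e.g. "192.168.49.2"
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.34.1",
		Hostname:          "functional-753440",
		NodeIP:            "192.168.49.2",
	})
}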
	I1009 18:21:58.262213   34792 ssh_runner.go:195] Run: crio config
	I1009 18:21:58.302300   34792 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 18:21:58.302331   34792 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 18:21:58.302340   34792 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 18:21:58.302345   34792 command_runner.go:130] > #
	I1009 18:21:58.302356   34792 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 18:21:58.302365   34792 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 18:21:58.302374   34792 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 18:21:58.302388   34792 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 18:21:58.302395   34792 command_runner.go:130] > # reload'.
	I1009 18:21:58.302413   34792 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 18:21:58.302424   34792 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 18:21:58.302434   34792 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 18:21:58.302446   34792 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 18:21:58.302451   34792 command_runner.go:130] > [crio]
	I1009 18:21:58.302460   34792 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 18:21:58.302491   34792 command_runner.go:130] > # container images, in this directory.
	I1009 18:21:58.302515   34792 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 18:21:58.302526   34792 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 18:21:58.302534   34792 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 18:21:58.302549   34792 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1009 18:21:58.302558   34792 command_runner.go:130] > # imagestore = ""
	I1009 18:21:58.302569   34792 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 18:21:58.302588   34792 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 18:21:58.302596   34792 command_runner.go:130] > # storage_driver = "overlay"
	I1009 18:21:58.302604   34792 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 18:21:58.302618   34792 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 18:21:58.302625   34792 command_runner.go:130] > # storage_option = [
	I1009 18:21:58.302630   34792 command_runner.go:130] > # ]
	I1009 18:21:58.302640   34792 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 18:21:58.302649   34792 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 18:21:58.302660   34792 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 18:21:58.302668   34792 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 18:21:58.302681   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 18:21:58.302689   34792 command_runner.go:130] > # always happen on a node reboot
	I1009 18:21:58.302700   34792 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 18:21:58.302714   34792 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 18:21:58.302727   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 18:21:58.302738   34792 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 18:21:58.302745   34792 command_runner.go:130] > # version_file_persist = ""
	I1009 18:21:58.302760   34792 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 18:21:58.302779   34792 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 18:21:58.302786   34792 command_runner.go:130] > # internal_wipe = true
	I1009 18:21:58.302800   34792 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 18:21:58.302809   34792 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 18:21:58.302823   34792 command_runner.go:130] > # internal_repair = true
	I1009 18:21:58.302832   34792 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 18:21:58.302841   34792 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 18:21:58.302850   34792 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 18:21:58.302858   34792 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 18:21:58.302871   34792 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 18:21:58.302877   34792 command_runner.go:130] > [crio.api]
	I1009 18:21:58.302889   34792 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 18:21:58.302895   34792 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 18:21:58.302903   34792 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 18:21:58.302908   34792 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 18:21:58.302918   34792 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 18:21:58.302922   34792 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 18:21:58.302928   34792 command_runner.go:130] > # stream_port = "0"
	I1009 18:21:58.302935   34792 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 18:21:58.302943   34792 command_runner.go:130] > # stream_enable_tls = false
	I1009 18:21:58.302953   34792 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 18:21:58.302963   34792 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 18:21:58.302972   34792 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 18:21:58.302984   34792 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303003   34792 command_runner.go:130] > # stream_tls_cert = ""
	I1009 18:21:58.303014   34792 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 18:21:58.303019   34792 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303024   34792 command_runner.go:130] > # stream_tls_key = ""
	I1009 18:21:58.303031   34792 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 18:21:58.303041   34792 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 18:21:58.303054   34792 command_runner.go:130] > # automatically pick up the changes.
	I1009 18:21:58.303061   34792 command_runner.go:130] > # stream_tls_ca = ""
	I1009 18:21:58.303083   34792 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303094   34792 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 18:21:58.303103   34792 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303111   34792 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 18:21:58.303120   34792 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 18:21:58.303130   34792 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 18:21:58.303156   34792 command_runner.go:130] > [crio.runtime]
	I1009 18:21:58.303167   34792 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 18:21:58.303176   34792 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 18:21:58.303182   34792 command_runner.go:130] > # "nofile=1024:2048"
	I1009 18:21:58.303192   34792 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 18:21:58.303201   34792 command_runner.go:130] > # default_ulimits = [
	I1009 18:21:58.303207   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303219   34792 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 18:21:58.303225   34792 command_runner.go:130] > # no_pivot = false
	I1009 18:21:58.303234   34792 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 18:21:58.303261   34792 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 18:21:58.303272   34792 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 18:21:58.303282   34792 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 18:21:58.303294   34792 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 18:21:58.303307   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303315   34792 command_runner.go:130] > # conmon = ""
	I1009 18:21:58.303321   34792 command_runner.go:130] > # Cgroup setting for conmon
	I1009 18:21:58.303330   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 18:21:58.303336   34792 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 18:21:58.303344   34792 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 18:21:58.303351   34792 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 18:21:58.303361   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303366   34792 command_runner.go:130] > # conmon_env = [
	I1009 18:21:58.303370   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303377   34792 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 18:21:58.303389   34792 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 18:21:58.303398   34792 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 18:21:58.303404   34792 command_runner.go:130] > # default_env = [
	I1009 18:21:58.303408   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303417   34792 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 18:21:58.303434   34792 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1009 18:21:58.303443   34792 command_runner.go:130] > # selinux = false
	I1009 18:21:58.303454   34792 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 18:21:58.303468   34792 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 18:21:58.303479   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303489   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.303500   34792 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 18:21:58.303513   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303520   34792 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 18:21:58.303530   34792 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 18:21:58.303543   34792 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 18:21:58.303553   34792 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 18:21:58.303567   34792 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 18:21:58.303578   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303586   34792 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 18:21:58.303597   34792 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 18:21:58.303603   34792 command_runner.go:130] > # the cgroup blockio controller.
	I1009 18:21:58.303610   34792 command_runner.go:130] > # blockio_config_file = ""
	I1009 18:21:58.303625   34792 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 18:21:58.303631   34792 command_runner.go:130] > # blockio parameters.
	I1009 18:21:58.303639   34792 command_runner.go:130] > # blockio_reload = false
	I1009 18:21:58.303649   34792 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 18:21:58.303659   34792 command_runner.go:130] > # irqbalance daemon.
	I1009 18:21:58.303667   34792 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 18:21:58.303718   34792 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1009 18:21:58.303738   34792 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 18:21:58.303748   34792 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 18:21:58.303756   34792 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 18:21:58.303765   34792 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 18:21:58.303772   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303777   34792 command_runner.go:130] > # rdt_config_file = ""
	I1009 18:21:58.303787   34792 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 18:21:58.303793   34792 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 18:21:58.303802   34792 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 18:21:58.303809   34792 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 18:21:58.303817   34792 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 18:21:58.303827   34792 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 18:21:58.303836   34792 command_runner.go:130] > # will be added.
	I1009 18:21:58.303844   34792 command_runner.go:130] > # default_capabilities = [
	I1009 18:21:58.303853   34792 command_runner.go:130] > # 	"CHOWN",
	I1009 18:21:58.303860   34792 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 18:21:58.303868   34792 command_runner.go:130] > # 	"FSETID",
	I1009 18:21:58.303874   34792 command_runner.go:130] > # 	"FOWNER",
	I1009 18:21:58.303883   34792 command_runner.go:130] > # 	"SETGID",
	I1009 18:21:58.303899   34792 command_runner.go:130] > # 	"SETUID",
	I1009 18:21:58.303908   34792 command_runner.go:130] > # 	"SETPCAP",
	I1009 18:21:58.303916   34792 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 18:21:58.303925   34792 command_runner.go:130] > # 	"KILL",
	I1009 18:21:58.303931   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303944   34792 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 18:21:58.303958   34792 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 18:21:58.303969   34792 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 18:21:58.303982   34792 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 18:21:58.304001   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304011   34792 command_runner.go:130] > default_sysctls = [
	I1009 18:21:58.304018   34792 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 18:21:58.304025   34792 command_runner.go:130] > ]
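In the `crio config` dump above, commented lines (`# ...`) show compiled-in defaults, while uncommented lines such as `conmon_cgroup = "pod"` and the `default_sysctls` list just printed are the active overrides. A sketch of how these TOML keys map onto a typed structure, using the github.com/BurntSushi/toml decoder (`go get github.com/BurntSushi/toml`); the struct and its field names are assumptions for illustration:

// Decode the active [crio.runtime] overrides from a crio.conf fragment.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Runtime struct {
			ConmonCgroup   string   `toml:"conmon_cgroup"`
			DefaultSysctls []string `toml:"default_sysctls"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	fragment := `
[crio.runtime]
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`
	var cfg crioConfig
	if _, err := toml.Decode(fragment, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Crio.Runtime.ConmonCgroup, cfg.Crio.Runtime.DefaultSysctls)
}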
	I1009 18:21:58.304033   34792 command_runner.go:130] > # List of devices on the host that a
	I1009 18:21:58.304046   34792 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 18:21:58.304055   34792 command_runner.go:130] > # allowed_devices = [
	I1009 18:21:58.304063   34792 command_runner.go:130] > # 	"/dev/fuse",
	I1009 18:21:58.304071   34792 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 18:21:58.304077   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304088   34792 command_runner.go:130] > # List of additional devices, specified as
	I1009 18:21:58.304102   34792 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 18:21:58.304113   34792 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 18:21:58.304124   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304153   34792 command_runner.go:130] > # additional_devices = [
	I1009 18:21:58.304163   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304172   34792 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 18:21:58.304182   34792 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 18:21:58.304188   34792 command_runner.go:130] > # 	"/etc/cdi",
	I1009 18:21:58.304197   34792 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 18:21:58.304202   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304212   34792 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 18:21:58.304225   34792 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 18:21:58.304234   34792 command_runner.go:130] > # Defaults to false.
	I1009 18:21:58.304243   34792 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 18:21:58.304257   34792 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 18:21:58.304269   34792 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 18:21:58.304278   34792 command_runner.go:130] > # hooks_dir = [
	I1009 18:21:58.304287   34792 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 18:21:58.304294   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304304   34792 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 18:21:58.304317   34792 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 18:21:58.304329   34792 command_runner.go:130] > # its default mounts from the following two files:
	I1009 18:21:58.304337   34792 command_runner.go:130] > #
	I1009 18:21:58.304347   34792 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 18:21:58.304361   34792 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 18:21:58.304382   34792 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 18:21:58.304389   34792 command_runner.go:130] > #
	I1009 18:21:58.304399   34792 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 18:21:58.304413   34792 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 18:21:58.304427   34792 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 18:21:58.304438   34792 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 18:21:58.304447   34792 command_runner.go:130] > #
	I1009 18:21:58.304455   34792 command_runner.go:130] > # default_mounts_file = ""
	I1009 18:21:58.304466   34792 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 18:21:58.304479   34792 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 18:21:58.304494   34792 command_runner.go:130] > # pids_limit = -1
	I1009 18:21:58.304508   34792 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1009 18:21:58.304521   34792 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 18:21:58.304532   34792 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 18:21:58.304547   34792 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 18:21:58.304557   34792 command_runner.go:130] > # log_size_max = -1
	I1009 18:21:58.304569   34792 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 18:21:58.304578   34792 command_runner.go:130] > # log_to_journald = false
	I1009 18:21:58.304601   34792 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 18:21:58.304614   34792 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 18:21:58.304622   34792 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 18:21:58.304634   34792 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 18:21:58.304647   34792 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 18:21:58.304657   34792 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 18:21:58.304669   34792 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 18:21:58.304677   34792 command_runner.go:130] > # read_only = false
	I1009 18:21:58.304688   34792 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 18:21:58.304700   34792 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 18:21:58.304708   34792 command_runner.go:130] > # live configuration reload.
	I1009 18:21:58.304716   34792 command_runner.go:130] > # log_level = "info"
	I1009 18:21:58.304726   34792 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 18:21:58.304737   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.304746   34792 command_runner.go:130] > # log_filter = ""
	I1009 18:21:58.304761   34792 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304773   34792 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 18:21:58.304781   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304795   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304805   34792 command_runner.go:130] > # uid_mappings = ""
	I1009 18:21:58.304815   34792 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304827   34792 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 18:21:58.304837   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304849   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304863   34792 command_runner.go:130] > # gid_mappings = ""
	I1009 18:21:58.304890   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 18:21:58.304904   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304916   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304929   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304939   34792 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 18:21:58.304949   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 18:21:58.304961   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304971   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304986   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.305032   34792 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 18:21:58.305045   34792 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 18:21:58.305054   34792 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 18:21:58.305063   34792 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 18:21:58.305074   34792 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 18:21:58.305084   34792 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 18:21:58.305097   34792 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 18:21:58.305106   34792 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 18:21:58.305116   34792 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 18:21:58.305124   34792 command_runner.go:130] > # drop_infra_ctr = true
	I1009 18:21:58.305148   34792 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 18:21:58.305162   34792 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 18:21:58.305177   34792 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 18:21:58.305185   34792 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 18:21:58.305197   34792 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 18:21:58.305209   34792 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 18:21:58.305222   34792 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 18:21:58.305233   34792 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 18:21:58.305241   34792 command_runner.go:130] > # shared_cpuset = ""
	I1009 18:21:58.305251   34792 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 18:21:58.305262   34792 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 18:21:58.305270   34792 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 18:21:58.305284   34792 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 18:21:58.305293   34792 command_runner.go:130] > # pinns_path = ""
	I1009 18:21:58.305305   34792 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 18:21:58.305318   34792 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 18:21:58.305328   34792 command_runner.go:130] > # enable_criu_support = true
	I1009 18:21:58.305337   34792 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 18:21:58.305350   34792 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 18:21:58.305359   34792 command_runner.go:130] > # enable_pod_events = false
	I1009 18:21:58.305371   34792 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 18:21:58.305382   34792 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 18:21:58.305389   34792 command_runner.go:130] > # default_runtime = "crun"
	I1009 18:21:58.305401   34792 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 18:21:58.305415   34792 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1009 18:21:58.305432   34792 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 18:21:58.305444   34792 command_runner.go:130] > # creation as a file is not desired either.
	I1009 18:21:58.305460   34792 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 18:21:58.305471   34792 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 18:21:58.305480   34792 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 18:21:58.305488   34792 command_runner.go:130] > # ]
	I1009 18:21:58.305499   34792 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 18:21:58.305512   34792 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 18:21:58.305524   34792 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 18:21:58.305535   34792 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 18:21:58.305542   34792 command_runner.go:130] > #
	I1009 18:21:58.305551   34792 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 18:21:58.305561   34792 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 18:21:58.305570   34792 command_runner.go:130] > # runtime_type = "oci"
	I1009 18:21:58.305582   34792 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 18:21:58.305590   34792 command_runner.go:130] > # inherit_default_runtime = false
	I1009 18:21:58.305601   34792 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 18:21:58.305611   34792 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 18:21:58.305619   34792 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 18:21:58.305628   34792 command_runner.go:130] > # monitor_env = []
	I1009 18:21:58.305638   34792 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 18:21:58.305647   34792 command_runner.go:130] > # allowed_annotations = []
	I1009 18:21:58.305665   34792 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 18:21:58.305674   34792 command_runner.go:130] > # no_sync_log = false
	I1009 18:21:58.305681   34792 command_runner.go:130] > # default_annotations = {}
	I1009 18:21:58.305690   34792 command_runner.go:130] > # stream_websockets = false
	I1009 18:21:58.305697   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.305730   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.305743   34792 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 18:21:58.305756   34792 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 18:21:58.305769   34792 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 18:21:58.305779   34792 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 18:21:58.305788   34792 command_runner.go:130] > #   in $PATH.
	I1009 18:21:58.305800   34792 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 18:21:58.305811   34792 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 18:21:58.305823   34792 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 18:21:58.305832   34792 command_runner.go:130] > #   state.
	I1009 18:21:58.305842   34792 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 18:21:58.305854   34792 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 18:21:58.305865   34792 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 18:21:58.305877   34792 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 18:21:58.305888   34792 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 18:21:58.305902   34792 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 18:21:58.305914   34792 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 18:21:58.305928   34792 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 18:21:58.305940   34792 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 18:21:58.305948   34792 command_runner.go:130] > #   The currently recognized values are:
	I1009 18:21:58.305962   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 18:21:58.305977   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 18:21:58.305989   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 18:21:58.306007   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 18:21:58.306022   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 18:21:58.306036   34792 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 18:21:58.306050   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 18:21:58.306061   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 18:21:58.306082   34792 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 18:21:58.306095   34792 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 18:21:58.306109   34792 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 18:21:58.306121   34792 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 18:21:58.306132   34792 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 18:21:58.306154   34792 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 18:21:58.306166   34792 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 18:21:58.306181   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 18:21:58.306194   34792 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 18:21:58.306204   34792 command_runner.go:130] > #   deprecated option "conmon".
	I1009 18:21:58.306216   34792 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 18:21:58.306226   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 18:21:58.306240   34792 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 18:21:58.306250   34792 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 18:21:58.306260   34792 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 18:21:58.306271   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 18:21:58.306285   34792 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1009 18:21:58.306294   34792 command_runner.go:130] > #   conmon-rs by using:
	I1009 18:21:58.306306   34792 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 18:21:58.306321   34792 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 18:21:58.306336   34792 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 18:21:58.306350   34792 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 18:21:58.306363   34792 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 18:21:58.306378   34792 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 18:21:58.306392   34792 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 18:21:58.306402   34792 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 18:21:58.306417   34792 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 18:21:58.306431   34792 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 18:21:58.306441   34792 command_runner.go:130] > #   when the machine crashes.
	I1009 18:21:58.306452   34792 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 18:21:58.306467   34792 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 18:21:58.306481   34792 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 18:21:58.306492   34792 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 18:21:58.306506   34792 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 18:21:58.306520   34792 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1009 18:21:58.306525   34792 command_runner.go:130] > #
	I1009 18:21:58.306534   34792 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 18:21:58.306542   34792 command_runner.go:130] > #
	I1009 18:21:58.306552   34792 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 18:21:58.306565   34792 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 18:21:58.306574   34792 command_runner.go:130] > #
	I1009 18:21:58.306584   34792 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 18:21:58.306597   34792 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 18:21:58.306605   34792 command_runner.go:130] > #
	I1009 18:21:58.306615   34792 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 18:21:58.306623   34792 command_runner.go:130] > # feature.
	I1009 18:21:58.306629   34792 command_runner.go:130] > #
	I1009 18:21:58.306641   34792 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 18:21:58.306654   34792 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 18:21:58.306667   34792 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 18:21:58.306680   34792 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 18:21:58.306692   34792 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 18:21:58.306700   34792 command_runner.go:130] > #
	I1009 18:21:58.306710   34792 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 18:21:58.306723   34792 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 18:21:58.306730   34792 command_runner.go:130] > #
	I1009 18:21:58.306740   34792 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 18:21:58.306752   34792 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 18:21:58.306760   34792 command_runner.go:130] > #
	I1009 18:21:58.306770   34792 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 18:21:58.306782   34792 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 18:21:58.306788   34792 command_runner.go:130] > # limitation.
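To make the reset semantics described above concrete, here is a minimal Go sketch of a watchdog whose five-second timeout restarts on every observed syscall notification; the channel and the stopWorkload callback are hypothetical stand-ins, not CRI-O's actual notifier plumbing:

package main

import (
	"fmt"
	"time"
)

// watchNotifier stops the workload 5s after the *last* blocked-syscall
// notification, mirroring the reset behavior described above.
func watchNotifier(syscalls <-chan string, stopWorkload func()) {
	timer := time.NewTimer(5 * time.Second)
	for {
		select {
		case name := <-syscalls:
			fmt.Println("blocked syscall observed:", name)
			if !timer.Stop() {
				<-timer.C // timer already fired; drain before reuse
			}
			timer.Reset(5 * time.Second) // a new syscall resets the timeout
		case <-timer.C:
			stopWorkload() // quiet for 5s: act on seccompNotifierAction=stop
			return
		}
	}
}

func main() {
	ch := make(chan string)
	go func() { ch <- "mount"; ch <- "ptrace" }()
	watchNotifier(ch, func() { fmt.Println("stopping workload") })
}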
	I1009 18:21:58.306798   34792 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 18:21:58.306809   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 18:21:58.306818   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306825   34792 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 18:21:58.306837   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306847   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306853   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306863   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306870   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.306879   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.306888   34792 command_runner.go:130] > allowed_annotations = [
	I1009 18:21:58.306898   34792 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 18:21:58.306904   34792 command_runner.go:130] > ]
	I1009 18:21:58.306914   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.306921   34792 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 18:21:58.306931   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 18:21:58.306937   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306944   34792 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 18:21:58.306952   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306962   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306970   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306980   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306989   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.307006   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.307017   34792 command_runner.go:130] > privileged_without_host_devices = false
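As an illustration of how these [crio.runtime.runtimes.*] tables decode, the following standalone sketch parses the two handlers above with the github.com/BurntSushi/toml library (a library choice assumed here for the example, not necessarily CRI-O's own):

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// runtimeHandler mirrors a subset of the fields shown in the config dump.
type runtimeHandler struct {
	RuntimePath        string   `toml:"runtime_path"`
	RuntimeType        string   `toml:"runtime_type"`
	RuntimeRoot        string   `toml:"runtime_root"`
	MonitorPath        string   `toml:"monitor_path"`
	AllowedAnnotations []string `toml:"allowed_annotations"`
}

type config struct {
	Crio struct {
		Runtime struct {
			Runtimes map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	const doc = `
[crio.runtime.runtimes.crun]
runtime_path = "/usr/libexec/crio/crun"
runtime_root = "/run/crun"
allowed_annotations = ["io.containers.trace-syscall"]

[crio.runtime.runtimes.runc]
runtime_path = "/usr/libexec/crio/runc"
runtime_root = "/run/runc"
`
	var c config
	if _, err := toml.Decode(doc, &c); err != nil {
		panic(err)
	}
	for name, h := range c.Crio.Runtime.Runtimes {
		fmt.Printf("%s -> %s (root %s)\n", name, h.RuntimePath, h.RuntimeRoot)
	}
}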
	I1009 18:21:58.307031   34792 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 18:21:58.307040   34792 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 18:21:58.307053   34792 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 18:21:58.307068   34792 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1009 18:21:58.307088   34792 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 18:21:58.307107   34792 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 18:21:58.307121   34792 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 18:21:58.307130   34792 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 18:21:58.307160   34792 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 18:21:58.307179   34792 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 18:21:58.307192   34792 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 18:21:58.307206   34792 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 18:21:58.307215   34792 command_runner.go:130] > # Example:
	I1009 18:21:58.307224   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 18:21:58.307234   34792 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 18:21:58.307244   34792 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 18:21:58.307253   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 18:21:58.307262   34792 command_runner.go:130] > # cpuset = "0-1"
	I1009 18:21:58.307269   34792 command_runner.go:130] > # cpushares = "5"
	I1009 18:21:58.307278   34792 command_runner.go:130] > # cpuquota = "1000"
	I1009 18:21:58.307285   34792 command_runner.go:130] > # cpuperiod = "100000"
	I1009 18:21:58.307294   34792 command_runner.go:130] > # cpulimit = "35"
	I1009 18:21:58.307301   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.307309   34792 command_runner.go:130] > # The workload name is workload-type.
	I1009 18:21:58.307323   34792 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 18:21:58.307336   34792 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 18:21:58.307349   34792 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 18:21:58.307365   34792 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 18:21:58.307377   34792 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
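The cpulimit-to-cpuquota relationship spelled out above reduces to one line of arithmetic (1000 millicores equals one full core, i.e. a quota equal to the period); a small sketch using the example values from this config:

package main

import "fmt"

// quotaFromLimit derives cpuquota (µs) from a millicore limit and a
// cpuperiod (µs): 1000 millicores == one full core == quota equal to period.
func quotaFromLimit(limitMillicores, periodMicros int64) int64 {
	return limitMillicores * periodMicros / 1000
}

func main() {
	// cpulimit = "35", cpuperiod = "100000" from the example above.
	fmt.Println(quotaFromLimit(35, 100000)) // 3500µs of CPU per 100000µs period
}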
	I1009 18:21:58.307388   34792 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 18:21:58.307399   34792 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 18:21:58.307410   34792 command_runner.go:130] > # Default value is set to true
	I1009 18:21:58.307418   34792 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 18:21:58.307430   34792 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 18:21:58.307440   34792 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 18:21:58.307449   34792 command_runner.go:130] > # Default value is set to 'false'
	I1009 18:21:58.307462   34792 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 18:21:58.307474   34792 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1009 18:21:58.307487   34792 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 18:21:58.307495   34792 command_runner.go:130] > # timezone = ""
	I1009 18:21:58.307506   34792 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 18:21:58.307513   34792 command_runner.go:130] > #
	I1009 18:21:58.307523   34792 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 18:21:58.307536   34792 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 18:21:58.307544   34792 command_runner.go:130] > [crio.image]
	I1009 18:21:58.307556   34792 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 18:21:58.307566   34792 command_runner.go:130] > # default_transport = "docker://"
	I1009 18:21:58.307578   34792 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 18:21:58.307591   34792 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307600   34792 command_runner.go:130] > # global_auth_file = ""
	I1009 18:21:58.307608   34792 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 18:21:58.307620   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307630   34792 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.307641   34792 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 18:21:58.307654   34792 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307665   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307675   34792 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 18:21:58.307686   34792 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 18:21:58.307698   34792 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 18:21:58.307708   34792 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 18:21:58.307719   34792 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 18:21:58.307727   34792 command_runner.go:130] > # pause_command = "/pause"
	I1009 18:21:58.307740   34792 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 18:21:58.307753   34792 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 18:21:58.307765   34792 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 18:21:58.307777   34792 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 18:21:58.307789   34792 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 18:21:58.307802   34792 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 18:21:58.307811   34792 command_runner.go:130] > # pinned_images = [
	I1009 18:21:58.307819   34792 command_runner.go:130] > # ]
	I1009 18:21:58.307830   34792 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 18:21:58.307842   34792 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 18:21:58.307855   34792 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 18:21:58.307868   34792 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 18:21:58.307879   34792 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 18:21:58.307887   34792 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 18:21:58.307899   34792 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 18:21:58.307912   34792 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 18:21:58.307930   34792 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 18:21:58.307943   34792 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1009 18:21:58.307955   34792 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 18:21:58.307971   34792 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 18:21:58.307982   34792 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 18:21:58.308001   34792 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 18:21:58.308010   34792 command_runner.go:130] > # changing them here.
	I1009 18:21:58.308020   34792 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 18:21:58.308029   34792 command_runner.go:130] > # insecure_registries = [
	I1009 18:21:58.308035   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308049   34792 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 18:21:58.308059   34792 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1009 18:21:58.308067   34792 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 18:21:58.308079   34792 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 18:21:58.308089   34792 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 18:21:58.308100   34792 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 18:21:58.308114   34792 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 18:21:58.308123   34792 command_runner.go:130] > # auto_reload_registries = false
	I1009 18:21:58.308133   34792 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 18:21:58.308163   34792 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1009 18:21:58.308174   34792 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 18:21:58.308183   34792 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 18:21:58.308191   34792 command_runner.go:130] > # The mode of short name resolution.
	I1009 18:21:58.308205   34792 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 18:21:58.308219   34792 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 18:21:58.308230   34792 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 18:21:58.308238   34792 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 18:21:58.308250   34792 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 18:21:58.308261   34792 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 18:21:58.308271   34792 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 18:21:58.308282   34792 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 18:21:58.308291   34792 command_runner.go:130] > # CNI plugins.
	I1009 18:21:58.308297   34792 command_runner.go:130] > [crio.network]
	I1009 18:21:58.308312   34792 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 18:21:58.308324   34792 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 18:21:58.308334   34792 command_runner.go:130] > # cni_default_network = ""
	I1009 18:21:58.308345   34792 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 18:21:58.308355   34792 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 18:21:58.308365   34792 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 18:21:58.308373   34792 command_runner.go:130] > # plugin_dirs = [
	I1009 18:21:58.308380   34792 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 18:21:58.308388   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308395   34792 command_runner.go:130] > # List of included pod metrics.
	I1009 18:21:58.308404   34792 command_runner.go:130] > # included_pod_metrics = [
	I1009 18:21:58.308411   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308423   34792 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 18:21:58.308429   34792 command_runner.go:130] > [crio.metrics]
	I1009 18:21:58.308440   34792 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 18:21:58.308447   34792 command_runner.go:130] > # enable_metrics = false
	I1009 18:21:58.308457   34792 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 18:21:58.308466   34792 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 18:21:58.308479   34792 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1009 18:21:58.308492   34792 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 18:21:58.308504   34792 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 18:21:58.308514   34792 command_runner.go:130] > # metrics_collectors = [
	I1009 18:21:58.308520   34792 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 18:21:58.308525   34792 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 18:21:58.308530   34792 command_runner.go:130] > # 	"containers_oom_total",
	I1009 18:21:58.308535   34792 command_runner.go:130] > # 	"processes_defunct",
	I1009 18:21:58.308540   34792 command_runner.go:130] > # 	"operations_total",
	I1009 18:21:58.308546   34792 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 18:21:58.308553   34792 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 18:21:58.308560   34792 command_runner.go:130] > # 	"operations_errors_total",
	I1009 18:21:58.308567   34792 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 18:21:58.308574   34792 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 18:21:58.308581   34792 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 18:21:58.308590   34792 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 18:21:58.308598   34792 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 18:21:58.308605   34792 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 18:21:58.308613   34792 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 18:21:58.308620   34792 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 18:21:58.308630   34792 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 18:21:58.308635   34792 command_runner.go:130] > # ]
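The prefix equivalence described above ("operations", "crio_operations" and "container_runtime_crio_operations" all naming the same collector) amounts to stripping two optional prefixes before comparison; a minimal sketch of that normalization, assuming nothing beyond the stated rule:

package main

import (
	"fmt"
	"strings"
)

// canonicalCollector strips the optional prefixes so that all three
// spellings of a collector name compare equal, as described above.
func canonicalCollector(name string) string {
	name = strings.TrimPrefix(name, "container_runtime_")
	name = strings.TrimPrefix(name, "crio_")
	return name
}

func main() {
	for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
		fmt.Println(canonicalCollector(n)) // all print "operations"
	}
}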
	I1009 18:21:58.308646   34792 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 18:21:58.308656   34792 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 18:21:58.308664   34792 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 18:21:58.308673   34792 command_runner.go:130] > # metrics_port = 9090
	I1009 18:21:58.308682   34792 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 18:21:58.308691   34792 command_runner.go:130] > # metrics_socket = ""
	I1009 18:21:58.308699   34792 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 18:21:58.308713   34792 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 18:21:58.308726   34792 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 18:21:58.308736   34792 command_runner.go:130] > # certificate on any modification event.
	I1009 18:21:58.308743   34792 command_runner.go:130] > # metrics_cert = ""
	I1009 18:21:58.308754   34792 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 18:21:58.308765   34792 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 18:21:58.308774   34792 command_runner.go:130] > # metrics_key = ""
	I1009 18:21:58.308785   34792 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 18:21:58.308793   34792 command_runner.go:130] > [crio.tracing]
	I1009 18:21:58.308803   34792 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 18:21:58.308812   34792 command_runner.go:130] > # enable_tracing = false
	I1009 18:21:58.308821   34792 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 18:21:58.308831   34792 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 18:21:58.308842   34792 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 18:21:58.308854   34792 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 18:21:58.308864   34792 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 18:21:58.308871   34792 command_runner.go:130] > [crio.nri]
	I1009 18:21:58.308879   34792 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 18:21:58.308888   34792 command_runner.go:130] > # enable_nri = true
	I1009 18:21:58.308908   34792 command_runner.go:130] > # NRI socket to listen on.
	I1009 18:21:58.308919   34792 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 18:21:58.308926   34792 command_runner.go:130] > # NRI plugin directory to use.
	I1009 18:21:58.308934   34792 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 18:21:58.308945   34792 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 18:21:58.308955   34792 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 18:21:58.308967   34792 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 18:21:58.309020   34792 command_runner.go:130] > # nri_disable_connections = false
	I1009 18:21:58.309031   34792 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 18:21:58.309039   34792 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 18:21:58.309050   34792 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 18:21:58.309060   34792 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 18:21:58.309070   34792 command_runner.go:130] > # NRI default validator configuration.
	I1009 18:21:58.309081   34792 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 18:21:58.309094   34792 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 18:21:58.309105   34792 command_runner.go:130] > # can be restricted/rejected:
	I1009 18:21:58.309114   34792 command_runner.go:130] > # - OCI hook injection
	I1009 18:21:58.309123   34792 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 18:21:58.309144   34792 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 18:21:58.309154   34792 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 18:21:58.309164   34792 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 18:21:58.309174   34792 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 18:21:58.309187   34792 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 18:21:58.309199   34792 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 18:21:58.309206   34792 command_runner.go:130] > #
	I1009 18:21:58.309213   34792 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 18:21:58.309228   34792 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 18:21:58.309239   34792 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 18:21:58.309249   34792 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 18:21:58.309259   34792 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 18:21:58.309270   34792 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 18:21:58.309282   34792 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 18:21:58.309292   34792 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 18:21:58.309300   34792 command_runner.go:130] > # ]
	I1009 18:21:58.309310   34792 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 18:21:58.309320   34792 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 18:21:58.309329   34792 command_runner.go:130] > [crio.stats]
	I1009 18:21:58.309338   34792 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 18:21:58.309350   34792 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 18:21:58.309361   34792 command_runner.go:130] > # stats_collection_period = 0
	I1009 18:21:58.309373   34792 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 18:21:58.309386   34792 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 18:21:58.309395   34792 command_runner.go:130] > # collection_period = 0
	I1009 18:21:58.309439   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287848676Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 18:21:58.309455   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287874416Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 18:21:58.309486   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.28789246Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 18:21:58.309504   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287909281Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 18:21:58.309520   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287966347Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:58.309548   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.288147535Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 18:21:58.309568   34792 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 18:21:58.309652   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:58.309667   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:58.309686   34792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:21:58.309718   34792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:21:58.309867   34792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
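minikube generates this kubeadm config from the options struct logged earlier; the following is a hedged sketch of the same idea using Go's text/template, with illustrative field names rather than minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// opts carries a few of the values visible in the kubeadm options log line.
type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8441,
		NodeName:         "functional-753440",
		PodSubnet:        "10.244.0.0/16",
	})
}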
	
	I1009 18:21:58.309941   34792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:21:58.317943   34792 command_runner.go:130] > kubeadm
	I1009 18:21:58.317964   34792 command_runner.go:130] > kubectl
	I1009 18:21:58.317972   34792 command_runner.go:130] > kubelet
	I1009 18:21:58.317992   34792 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:21:58.318041   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:21:58.325700   34792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:21:58.338455   34792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:21:58.350701   34792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:21:58.362930   34792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:21:58.366724   34792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
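The grep above confirms that /etc/hosts already maps the node IP to control-plane.minikube.internal; an equivalent check in Go, as a sketch (path, IP and hostname taken from the log):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasHostEntry reports whether the hosts file maps ip to host, mirroring
// the `grep "192.168.49.2	control-plane.minikube.internal$" /etc/hosts` check.
func hasHostEntry(path, ip, host string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip && fields[len(fields)-1] == host {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
	fmt.Println(ok, err)
}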
	I1009 18:21:58.366809   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:58.451602   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:58.464478   34792 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:21:58.464503   34792 certs.go:195] generating shared ca certs ...
	I1009 18:21:58.464518   34792 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:58.464657   34792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:21:58.464699   34792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:21:58.464708   34792 certs.go:257] generating profile certs ...
	I1009 18:21:58.464789   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:21:58.464832   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:21:58.464870   34792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:21:58.464880   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:21:58.464891   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:21:58.464904   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:21:58.464914   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:21:58.464926   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:21:58.464938   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:21:58.464950   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:21:58.464961   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:21:58.465007   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:21:58.465033   34792 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:21:58.465040   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:21:58.465060   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:21:58.465083   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:21:58.465117   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:21:58.465182   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:58.465212   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.465226   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.465252   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.465730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:21:58.483386   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:21:58.500383   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:21:58.517315   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:21:58.533903   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:21:58.550845   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:21:58.567242   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:21:58.584667   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:21:58.601626   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:21:58.618749   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:21:58.635789   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:21:58.652270   34792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:21:58.664508   34792 ssh_runner.go:195] Run: openssl version
	I1009 18:21:58.670569   34792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 18:21:58.670643   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:21:58.679189   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683037   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683067   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683111   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.716325   34792 command_runner.go:130] > b5213941
	I1009 18:21:58.716574   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:21:58.724647   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:21:58.732750   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736237   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736342   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736392   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.769488   34792 command_runner.go:130] > 51391683
	I1009 18:21:58.769675   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:21:58.778213   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:21:58.786758   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790431   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790472   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790516   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.824579   34792 command_runner.go:130] > 3ec20f2e
	I1009 18:21:58.824670   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
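Each round above hashes a CA certificate with `openssl x509 -hash -noout` and symlinks it as /etc/ssl/certs/<hash>.0 so OpenSSL's hashed-directory lookup can find it; a sketch that shells out the same way (the certificate path is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and creates
// the /etc/ssl/certs/<hash>.0 symlink that the hashed lookup expects.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace any existing link (requires root).
	return exec.Command("ln", "-fs", certPath, link).Run()
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"))
}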
	I1009 18:21:58.832975   34792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836722   34792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836745   34792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 18:21:58.836750   34792 command_runner.go:130] > Device: 8,1	Inode: 583629      Links: 1
	I1009 18:21:58.836756   34792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.836762   34792 command_runner.go:130] > Access: 2025-10-09 18:17:52.024667536 +0000
	I1009 18:21:58.836766   34792 command_runner.go:130] > Modify: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836771   34792 command_runner.go:130] > Change: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836775   34792 command_runner.go:130] >  Birth: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836829   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:21:58.871297   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.871384   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:21:58.905951   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.906293   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:21:58.941072   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.941180   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:21:58.975637   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.975713   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:21:59.010686   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:59.010763   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 18:21:59.045288   34792 command_runner.go:130] > Certificate will not expire
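`openssl x509 -checkend 86400` exits successfully only if the certificate is still valid 24 hours from now; the same test can be done natively in Go, sketched here against one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/peer.crt", 86400*time.Second)
	fmt.Println(soon, err) // false, nil  ~=  "Certificate will not expire"
}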
	I1009 18:21:59.045372   34792 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:59.045468   34792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:21:59.045548   34792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:21:59.072734   34792 cri.go:89] found id: ""
	I1009 18:21:59.072811   34792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:21:59.080291   34792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 18:21:59.080312   34792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 18:21:59.080317   34792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 18:21:59.080960   34792 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:21:59.080977   34792 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:21:59.081028   34792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:21:59.088791   34792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:21:59.088891   34792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.088923   34792 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753440" cluster setting kubeconfig missing "functional-753440" context setting]
	I1009 18:21:59.089226   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
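lock.go acquires a per-file write lock (Delay:500ms Timeout:1m0s) before rewriting the kubeconfig. A minimal sketch of that acquire-with-retry pattern, assuming a flock(2)-based lock on Linux; minikube's actual lock.go may be implemented differently:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireFileLock polls for an exclusive flock on path, retrying every
// delay until timeout, mirroring the Delay:500ms Timeout:1m0s fields
// logged above. The caller must Close the returned file to release.
func acquireFileLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // lock held
		} else if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out locking %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquireFileLock("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Println("write lock held; safe to rewrite kubeconfig")
}
```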
	I1009 18:21:59.115972   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.116113   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
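The kapi.go dump is the rest.Config client-go derives from the repaired kubeconfig: the apiserver Host plus client cert/key and CA paths. A hedged sketch of producing such a config with client-go (the kubeconfig path here is a placeholder, not the job's real file):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Derive a *rest.Config from a kubeconfig, as kapi.go does above:
	// Host comes from the cluster entry, CertFile/KeyFile/CAFile from
	// the user and cluster entries.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", cfg.Host) // e.g. https://192.168.49.2:8441
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```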
	I1009 18:21:59.116551   34792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:21:59.116565   34792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:21:59.116570   34792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:21:59.116574   34792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:21:59.116578   34792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:21:59.116681   34792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:21:59.116939   34792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:21:59.125251   34792 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:21:59.125284   34792 kubeadm.go:601] duration metric: took 44.302105ms to restartPrimaryControlPlane
	I1009 18:21:59.125294   34792 kubeadm.go:402] duration metric: took 79.928873ms to StartCluster
	I1009 18:21:59.125313   34792 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.125417   34792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.125977   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.126266   34792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:21:59.126330   34792 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:21:59.126472   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:59.126485   34792 addons.go:69] Setting default-storageclass=true in profile "functional-753440"
	I1009 18:21:59.126503   34792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753440"
	I1009 18:21:59.126475   34792 addons.go:69] Setting storage-provisioner=true in profile "functional-753440"
	I1009 18:21:59.126533   34792 addons.go:238] Setting addon storage-provisioner=true in "functional-753440"
	I1009 18:21:59.126575   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.126787   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.126953   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.129433   34792 out.go:179] * Verifying Kubernetes components...
	I1009 18:21:59.130694   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:59.147348   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.147489   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.147681   34792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:21:59.147763   34792 addons.go:238] Setting addon default-storageclass=true in "functional-753440"
	I1009 18:21:59.147799   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.148103   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.149131   34792 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.149169   34792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:21:59.149223   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172020   34792 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.172047   34792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:21:59.172108   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172953   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.190936   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
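The `scp memory -->` lines copy an in-memory manifest to the node over the SSH endpoint logged by sshutil.go (127.0.0.1:32778, user docker). A sketch of that push-bytes-over-SSH pattern using golang.org/x/crypto/ssh; this illustrates the general technique, not minikube's actual sshutil code:

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile copies in-memory bytes to a remote path, the "scp memory -->"
// operation the ssh_runner lines log. Host and key details are placeholders.
func pushFile(addr, user, keyPath, remotePath string, data []byte) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a throwaway test rig only
	})
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// sudo tee writes the manifest where kubectl apply will later find it.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
	if err := pushFile("127.0.0.1:32778", "docker", os.Getenv("HOME")+"/.ssh/id_rsa", "/tmp/demo.yaml", manifest); err != nil {
		fmt.Println(err)
	}
}
```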
	I1009 18:21:59.227445   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:59.240811   34792 node_ready.go:35] waiting up to 6m0s for node "functional-753440" to be "Ready" ...
	I1009 18:21:59.240954   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.241028   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.241430   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.284375   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.300190   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.338559   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.338609   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.338653   34792 retry.go:31] will retry after 183.514108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353053   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.353121   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353157   34792 retry.go:31] will retry after 252.751171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
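Each failed apply is rescheduled by retry.go after a randomized, growing interval (183ms and 252ms here, stretching to several seconds later in the log). A minimal sketch of that jittered exponential backoff with apimachinery's wait package; runKubectlApply is a hypothetical stand-in for the ssh_runner invocation:

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runKubectlApply stands in for the ssh_runner call in the log; it keeps
// failing until the apiserver starts answering on :8441.
func runKubectlApply(manifest string) error {
	return errors.New("connect: connection refused")
}

func main() {
	backoff := wait.Backoff{
		Duration: 200 * time.Millisecond, // first retry, like the ~183ms above
		Factor:   1.6,                    // grow toward the multi-second retries
		Jitter:   0.5,                    // randomize, as the logged intervals are
		Steps:    10,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := runKubectlApply("/etc/kubernetes/addons/storageclass.yaml"); err != nil {
			return false, nil // retryable: keep backing off
		}
		return true, nil // applied successfully
	})
	fmt.Println("final:", err) // wait.ErrWaitTimeout if the server never came up
}
```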
	I1009 18:21:59.522422   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.573424   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.575988   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.576058   34792 retry.go:31] will retry after 293.779687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.606194   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.660438   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.660484   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.660501   34792 retry.go:31] will retry after 279.387954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.741722   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.741829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.742206   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.870497   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.921333   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.923563   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.923589   34792 retry.go:31] will retry after 737.997993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.940822   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.989898   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.992209   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.992239   34792 retry.go:31] will retry after 533.533276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.241740   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.242177   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:00.526746   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:00.575738   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.578103   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.578131   34792 retry.go:31] will retry after 930.387704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.662455   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:00.715389   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.715427   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.715452   34792 retry.go:31] will retry after 867.874306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.741572   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:01.241687   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.241751   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.242091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:01.242159   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
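node_ready.go polls GET /api/v1/nodes/functional-753440 roughly every 500ms for up to 6m0s, treating connection-refused as retryable until the node's Ready condition reports True. A hedged sketch of that wait loop with client-go (clientset construction as in the earlier sketch; the kubeconfig path is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6m, like "waiting up to 6m0s for node
	// ... to be Ready"; transient dial errors just mean "not yet".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-753440", metav1.GetOptions{})
			if err != nil {
				return false, nil // apiserver still down: retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready wait:", err)
}
```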
	I1009 18:22:01.509541   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:01.558188   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.560577   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.560605   34792 retry.go:31] will retry after 1.199996419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.583824   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:01.634758   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.634811   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.634834   34792 retry.go:31] will retry after 674.661756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.741022   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.741106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.741428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.309923   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:02.359167   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.361481   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.361513   34792 retry.go:31] will retry after 1.255051156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.741014   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.741086   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.741469   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.761694   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:02.809418   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.811709   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.811735   34792 retry.go:31] will retry after 2.010356843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.241377   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.241665   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:03.617237   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:03.670575   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:03.670619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.670643   34792 retry.go:31] will retry after 3.029315393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:03.742368   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:04.241167   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.241255   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.241616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.741405   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.741793   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.823125   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:04.874252   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:04.876942   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:04.876977   34792 retry.go:31] will retry after 2.337146666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:05.241523   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.241603   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.241925   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:05.741876   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.741944   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.742306   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.241056   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.241120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.241524   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:06.241591   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:06.701185   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:06.741960   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.742030   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.742348   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.753588   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:06.753625   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:06.753645   34792 retry.go:31] will retry after 5.067292314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.214286   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:07.241989   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.242085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.242465   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:07.267576   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:07.267619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.267638   34792 retry.go:31] will retry after 3.639407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.741211   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.741611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:08.241376   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.241468   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.241797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:08.241859   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:08.741654   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.741723   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.742130   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.742012   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.742487   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.241171   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.241238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.241608   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.741552   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.741634   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.741987   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:10.742077   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:10.907343   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:10.958356   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:10.960749   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:10.960774   34792 retry.go:31] will retry after 7.184910667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.241304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.241646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.741393   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.741703   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.821955   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:11.870785   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:11.873227   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.873260   34792 retry.go:31] will retry after 9.534535371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:12.241850   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.242244   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[same GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 request repeated every ~500ms from 18:22:12.741 through 18:22:17.741, every attempt refused; node_ready.go:55 logged "will retry" warnings at 18:22:13.241 and 18:22:15.741]
	I1009 18:22:18.146014   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:18.197672   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:18.200076   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.200108   34792 retry.go:31] will retry after 13.416592948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
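
	The "will retry after 13.416592948s" line above comes from minikube's retry helper: each failed kubectl apply is rescheduled after a randomized delay rather than a fixed interval (note the uneven 13.4s, 6.2s, 21.1s intervals across the retries in this log). Below is a minimal Go sketch of that retry-with-jittered-backoff pattern; the function name, attempt limit, and delay bounds are illustrative, not minikube's actual implementation.

	    package main

	    import (
	    	"fmt"
	    	"math/rand"
	    	"os/exec"
	    	"time"
	    )

	    // applyWithRetry re-runs a kubectl apply with a randomized delay between
	    // attempts, mirroring the "apply failed, will retry after Ns" pattern in
	    // the log above. maxAttempts and the delay bounds are illustrative.
	    func applyWithRetry(manifest string, maxAttempts int) error {
	    	var err error
	    	for attempt := 1; attempt <= maxAttempts; attempt++ {
	    		err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
	    		if err == nil {
	    			return nil
	    		}
	    		// Jittered delay between 5s and 25s, comparable to the log's
	    		// 13.4s / 6.2s / 21.1s retry intervals.
	    		delay := 5*time.Second + time.Duration(rand.Int63n(int64(20*time.Second)))
	    		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
	    		time.Sleep(delay)
	    	}
	    	return fmt.Errorf("apply of %s failed after %d attempts: %w", manifest, maxAttempts, err)
	    }

	    func main() {
	    	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
	    		fmt.Println(err)
	    	}
	    }
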
	[same GET polling repeated every ~500ms from 18:22:18.241 through 18:22:21.242, every attempt refused; node_ready.go:55 "will retry" warnings at 18:22:18.241 and 18:22:20.742]
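
	The GET /api/v1/nodes/functional-753440 polling that dominates this log is minikube's node_ready check: fetch the Node object, look for a Ready condition with status True, and retry on any error (here, always "connection refused"). A roughly equivalent client-go sketch follows; the 500ms cadence, node name, and kubeconfig path are taken from this log, the loop structure is illustrative.

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady polls the apiserver until the named node reports the
	    // Ready condition, logging and retrying on errors such as the
	    // "connection refused" seen throughout the log above.
	    func waitNodeReady(clientset *kubernetes.Clientset, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	    		if err != nil {
	    			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
	    		} else {
	    			for _, cond := range node.Status.Conditions {
	    				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
	    					return nil
	    				}
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	    	}
	    	return fmt.Errorf("node %q never became Ready within %s", name, timeout)
	    }

	    func main() {
	    	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	clientset, err := kubernetes.NewForConfig(config)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := waitNodeReady(clientset, "functional-753440", 2*time.Minute); err != nil {
	    		fmt.Println(err)
	    	}
	    }
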
	I1009 18:22:21.408800   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:21.460386   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:21.460443   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:21.460465   34792 retry.go:31] will retry after 6.196258431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[same GET polling repeated every ~500ms from 18:22:21.741 through 18:22:27.242, every attempt refused; node_ready.go:55 "will retry" warnings at 18:22:23.241 and 18:22:25.741]
	I1009 18:22:27.657912   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:27.709732   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:27.709776   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:27.709796   34792 retry.go:31] will retry after 21.104663041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[same GET polling repeated every ~500ms from 18:22:27.741 through 18:22:31.241, every attempt refused; node_ready.go:55 "will retry" warnings at 18:22:27.742 and 18:22:30.242]
	I1009 18:22:31.617269   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:31.669784   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:31.669834   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:31.669851   34792 retry.go:31] will retry after 15.154475243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
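
	Every failure in this log has the same signature: dial tcp [::1]:8441 (or 192.168.49.2:8441): connect: connection refused, i.e. nothing is listening on the apiserver port at all. The suggested --validate=false would only skip the OpenAPI schema download; the apply itself would still fail against a dead endpoint. A small Go probe like the sketch below, using the address from this log, distinguishes a dead listener (connection refused) from a network problem (timeout); Linux-specific error matching assumed.

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"net"
	    	"syscall"
	    	"time"
	    )

	    func main() {
	    	// Address taken from the log above; adjust for other clusters.
	    	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	    	switch {
	    	case err == nil:
	    		conn.Close()
	    		fmt.Println("apiserver port is accepting connections")
	    	case errors.Is(err, syscall.ECONNREFUSED):
	    		fmt.Println("connection refused: nothing listening (apiserver down)")
	    	default:
	    		fmt.Printf("dial failed: %v (network/firewall issue?)\n", err)
	    	}
	    }
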
	[same GET polling repeated every ~500ms from 18:22:31.741 through 18:22:46.742, every attempt refused; node_ready.go:55 "will retry" warnings at 18:22:32.741, 34.741, 36.741, 39.242, 41.741, 44.241 and 46.242]
	I1009 18:22:46.825331   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:46.875678   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:46.878302   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:46.878331   34792 retry.go:31] will retry after 24.753743157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[same GET polling repeated every ~500ms from 18:22:47.241 through 18:22:48.741, every attempt refused; node_ready.go:55 "will retry" warning at 18:22:48.741]
	I1009 18:22:48.815023   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:48.866903   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:48.866953   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:48.866975   34792 retry.go:31] will retry after 23.693621864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[same GET polling repeated every ~500ms from 18:22:49.241 through 18:23:04.241, every attempt refused; node_ready.go:55 "will retry" warnings at 18:22:51.241, 53.241, 55.741, 57.742, 18:23:00.241 and 18:23:02.242]
	I1009 18:23:04.741396   34792 type.go:168] "Request Body" body=""
	I1009 18:23:04.741460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:04.741772   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:04.741828   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:05.241582   34792 type.go:168] "Request Body" body=""
	I1009 18:23:05.241646   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:05.241956   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:05.741882   34792 type.go:168] "Request Body" body=""
	I1009 18:23:05.741951   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:05.742320   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:06.241065   34792 type.go:168] "Request Body" body=""
	I1009 18:23:06.241173   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:06.241497   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:06.741232   34792 type.go:168] "Request Body" body=""
	I1009 18:23:06.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:06.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:07.241402   34792 type.go:168] "Request Body" body=""
	I1009 18:23:07.241487   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:07.241813   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:07.241865   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:07.741620   34792 type.go:168] "Request Body" body=""
	I1009 18:23:07.741692   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:07.742021   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:08.241855   34792 type.go:168] "Request Body" body=""
	I1009 18:23:08.241917   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:08.242226   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:08.741000   34792 type.go:168] "Request Body" body=""
	I1009 18:23:08.741070   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:08.741419   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:09.241169   34792 type.go:168] "Request Body" body=""
	I1009 18:23:09.241236   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.241556   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:09.741160   34792 type.go:168] "Request Body" body=""
	I1009 18:23:09.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.741542   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:09.741611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:10.241116   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:10.741472   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.741586   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.741912   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.241829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.242195   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.632645   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:11.684065   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:11.686606   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:23:11.686651   34792 retry.go:31] will retry after 43.228082894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
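(The retry.go line above schedules the next storageclass.yaml attempt after a long, growing delay rather than a fixed interval. A small self-contained sketch of that exponential-backoff-with-jitter pattern; retryExpo is a hypothetical helper for illustration, and minikube's real retry package differs in detail:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponential backoff plus jitter until it
// succeeds or maxElapsed is exceeded. A sketch of the "will retry
// after ..." behaviour in the log, not minikube's actual helper.
func retryExpo(fn func() error, initial, maxElapsed time.Duration) error {
	delay := initial
	start := time.Now()
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		// Add up to 50% jitter so parallel retries don't synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

Backoff explains the otherwise odd 43.228082894s figure: it is a jittered sample from a growing delay window, not a configured constant.)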
	I1009 18:23:11.741902   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.741967   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.742335   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:11.742398   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:12.241111   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.241543   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:12.560933   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:23:12.614798   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614843   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614940   34792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
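(Note the failure above is not in the manifest itself: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver (https://localhost:8441/openapi/v2), so while the apiserver refuses connections every apply fails at the validation step, and --validate=false would only defer the failure to the apply itself. A hedged sketch of how this step shells out, copying the command and paths from the log; os/exec stands in for minikube's ssh_runner, which actually runs this inside the node over SSH:

package main

import (
	"fmt"
	"os/exec"
)

// applyAddon mirrors the logged command: kubectl apply --force against
// a manifest staged under /etc/kubernetes/addons. Illustrative only.
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down this surfaces the
		// "failed to download openapi ... connection refused" error above.
		return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

The storage-provisioner addon gives up immediately (the out.go warning above), while default-storageclass keeps retrying on the backoff schedule until the overall addon deadline expires.)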
	I1009 18:23:12.741072   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.741484   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.241192   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.241516   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.741110   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.741196   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.741493   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:14.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.241314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:14.241738   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:14.741425   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.741488   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.741803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.241603   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.241993   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.741872   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.741942   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.742284   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.241004   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.241108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.241472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.741281   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.741357   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.741657   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:16.741710   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:17.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.241489   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.241829   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:17.741674   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.741762   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.742082   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.241893   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.241965   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.242388   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.741175   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.741239   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.741553   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:19.241408   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.241852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:19.241908   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:19.741678   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.742039   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.241909   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.241972   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.741268   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.741646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.241394   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.241459   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.741624   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.741688   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.741997   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:21.742063   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:22.241916   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.242380   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:22.741197   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.741265   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.741575   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.241382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.741463   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.741537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.741848   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:24.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.241717   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.242059   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:24.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:24.741910   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.741982   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.742333   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.241063   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.241128   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.241505   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.741559   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.741626   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.741933   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:26.241874   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.242332   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:26.242390   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:26.741061   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.741125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.741525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.241264   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.241334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.241644   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.741375   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.741438   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.741748   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.241487   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.241553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.241862   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.741699   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.741767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:28.742126   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:29.241949   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.242051   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.242384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:29.741054   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.741120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.241596   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.741484   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.741560   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.741926   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:31.241778   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.241839   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.242174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:31.242227   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:31.740976   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.741038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.741384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.241106   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.241567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.741352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.241340   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.241406   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.241743   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.741456   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.741516   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.741808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:33.741862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:34.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.241695   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.242060   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:34.741908   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.741974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.241113   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.741288   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:36.241422   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.241820   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:36.241874   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:36.741640   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.741707   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.742009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.241833   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.241903   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.242258   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.740969   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.741033   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.741371   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.241096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.241188   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.241533   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.741616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:38.741669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:39.241545   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.241620   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:39.741751   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.741816   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.742174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.241991   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.242060   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.242448   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.741326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:40.741695   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:41.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.241463   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:41.741321   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.741709   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.241467   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.241529   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.241897   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.741700   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.741768   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.742079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:42.742160   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:43.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:43.741093   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.741513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.241263   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.241346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.241690   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.741269   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.741339   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:45.241373   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.241435   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.241795   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:45.241846   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:45.741727   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.741791   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.742097   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.241926   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.241996   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.741120   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.741602   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.241322   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.241391   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.241768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.741575   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.741638   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.741939   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:47.741988   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:48.241711   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.241771   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.242111   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:48.741933   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.742004   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.742339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.241046   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.241123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.241511   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.741638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:50.241345   34792 type.go:168] "Request Body" body=""
	I1009 18:23:50.241408   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:50.241740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:50.241790   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:50.741667   34792 type.go:168] "Request Body" body=""
	I1009 18:23:50.741736   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:50.742048   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:51.241420   34792 type.go:168] "Request Body" body=""
	I1009 18:23:51.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:51.241828   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:51.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:23:51.741742   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:51.742050   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:52.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:23:52.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:52.242345   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:52.242396   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:52.741096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:52.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:52.741495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:53.241277   34792 type.go:168] "Request Body" body=""
	I1009 18:23:53.241348   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:53.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:53.741468   34792 type.go:168] "Request Body" body=""
	I1009 18:23:53.741553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:53.741866   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:54.241666   34792 type.go:168] "Request Body" body=""
	I1009 18:23:54.241732   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:54.242078   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:54.741932   34792 type.go:168] "Request Body" body=""
	I1009 18:23:54.741997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:54.742359   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:54.742411   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:54.915717   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:54.969064   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969123   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969226   34792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:23:54.971206   34792 out.go:179] * Enabled addons: 
	I1009 18:23:54.972204   34792 addons.go:514] duration metric: took 1m55.845883827s for enable addons: enabled=[]
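	The addon phase fails for the same underlying reason as the node poll: with nothing listening on port 8441, kubectl cannot download the OpenAPI schema it needs for client-side validation, so `apply --force` exits 1 and minikube records "apply failed, will retry" before closing the phase with an empty addon list. A sketch of such a retry wrapper around the exact command line from the log (a hypothetical helper, not minikube's addons code):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // applyWithRetry re-runs kubectl apply until it succeeds or the
	    // attempt budget is spent; each failure here is expected to be the
	    // apiserver still coming back up.
	    func applyWithRetry(manifest string, attempts int) error {
	        var lastErr error
	        for i := 0; i < attempts; i++ {
	            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
	                "/var/lib/minikube/binaries/v1.34.1/kubectl",
	                "apply", "--force", "-f", manifest)
	            out, err := cmd.CombinedOutput()
	            if err == nil {
	                return nil
	            }
	            lastErr = fmt.Errorf("apply failed: %w\n%s", err, out)
	            time.Sleep(2 * time.Second) // back off while the apiserver restarts
	        }
	        return lastErr
	    }

	    func main() {
	        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
	            fmt.Println(err)
	        }
	    }

	kubectl's own hint (--validate=false) would only skip schema validation; the apply itself would still fail to reach localhost:8441, so waiting for the apiserver is the meaningful retry, not disabling validation.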
	I1009 18:23:55.241550   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.241625   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:55.741824   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.741904   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.742290   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:56.241973   34792 type.go:168] "Request Body" body=""
	I1009 18:23:56.242123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:56.242483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:56.741036   34792 type.go:168] "Request Body" body=""
	I1009 18:23:56.741152   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:56.741467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:57.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:23:57.241200   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:57.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:57.241611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:57.741252   34792 type.go:168] "Request Body" body=""
	I1009 18:23:57.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:57.741629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:58.241447   34792 type.go:168] "Request Body" body=""
	I1009 18:23:58.241725   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:58.242009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:58.741244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:58.741314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:58.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:59.241582   34792 type.go:168] "Request Body" body=""
	I1009 18:23:59.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:59.241976   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:59.242029   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:59.741645   34792 type.go:168] "Request Body" body=""
	I1009 18:23:59.741711   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:59.742016   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:00.241679   34792 type.go:168] "Request Body" body=""
	I1009 18:24:00.241745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:00.242104   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:00.741941   34792 type.go:168] "Request Body" body=""
	I1009 18:24:00.742015   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:00.742375   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:01.240979   34792 type.go:168] "Request Body" body=""
	I1009 18:24:01.241079   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:01.241446   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:01.741104   34792 type.go:168] "Request Body" body=""
	I1009 18:24:01.741198   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:01.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:01.741587   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:02.241191   34792 type.go:168] "Request Body" body=""
	I1009 18:24:02.241259   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:02.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:02.741155   34792 type.go:168] "Request Body" body=""
	I1009 18:24:02.741230   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:02.741560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:03.241230   34792 type.go:168] "Request Body" body=""
	I1009 18:24:03.241291   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:03.241606   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:03.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:24:03.741320   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:03.741610   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:03.741659   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:04.241477   34792 type.go:168] "Request Body" body=""
	I1009 18:24:04.241610   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:04.241994   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:04.741666   34792 type.go:168] "Request Body" body=""
	I1009 18:24:04.741733   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:04.742049   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:05.241727   34792 type.go:168] "Request Body" body=""
	I1009 18:24:05.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:05.242113   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:05.741949   34792 type.go:168] "Request Body" body=""
	I1009 18:24:05.742014   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:05.742361   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:05.742412   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:06.240966   34792 type.go:168] "Request Body" body=""
	I1009 18:24:06.241087   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:06.241438   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:06.741043   34792 type.go:168] "Request Body" body=""
	I1009 18:24:06.741125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:06.741482   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:07.241180   34792 type.go:168] "Request Body" body=""
	I1009 18:24:07.241242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:07.241557   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:07.741167   34792 type.go:168] "Request Body" body=""
	I1009 18:24:07.741259   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:07.741613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:08.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:24:08.241302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:08.241607   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:08.241657   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:08.741270   34792 type.go:168] "Request Body" body=""
	I1009 18:24:08.741337   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:08.741689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:09.241656   34792 type.go:168] "Request Body" body=""
	I1009 18:24:09.241721   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:09.242060   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:09.741758   34792 type.go:168] "Request Body" body=""
	I1009 18:24:09.741832   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:09.742204   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:10.241854   34792 type.go:168] "Request Body" body=""
	I1009 18:24:10.241948   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:10.242297   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:10.242356   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:10.740989   34792 type.go:168] "Request Body" body=""
	I1009 18:24:10.741064   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:10.741405   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:11.242008   34792 type.go:168] "Request Body" body=""
	I1009 18:24:11.242096   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:11.242414   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:11.741019   34792 type.go:168] "Request Body" body=""
	I1009 18:24:11.741090   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:11.741443   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:12.241051   34792 type.go:168] "Request Body" body=""
	I1009 18:24:12.241127   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:12.241488   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:12.741129   34792 type.go:168] "Request Body" body=""
	I1009 18:24:12.741226   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:12.741564   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:12.741614   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:13.241115   34792 type.go:168] "Request Body" body=""
	I1009 18:24:13.241208   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:13.241540   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:13.741171   34792 type.go:168] "Request Body" body=""
	I1009 18:24:13.741235   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:13.741549   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:14.241221   34792 type.go:168] "Request Body" body=""
	I1009 18:24:14.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:14.241613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:14.741228   34792 type.go:168] "Request Body" body=""
	I1009 18:24:14.741294   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:14.741619   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:14.741670   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:15.241203   34792 type.go:168] "Request Body" body=""
	I1009 18:24:15.241266   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:15.241587   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:15.741480   34792 type.go:168] "Request Body" body=""
	I1009 18:24:15.741544   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:15.741911   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:16.241491   34792 type.go:168] "Request Body" body=""
	I1009 18:24:16.241558   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:16.241870   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:16.741517   34792 type.go:168] "Request Body" body=""
	I1009 18:24:16.741585   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:16.741911   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:16.741963   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:17.241588   34792 type.go:168] "Request Body" body=""
	I1009 18:24:17.241650   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:17.241989   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:17.741644   34792 type.go:168] "Request Body" body=""
	I1009 18:24:17.741710   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:17.742011   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:18.241688   34792 type.go:168] "Request Body" body=""
	I1009 18:24:18.241755   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:18.242125   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:18.741790   34792 type.go:168] "Request Body" body=""
	I1009 18:24:18.741854   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:18.742223   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:18.742290   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:19.242039   34792 type.go:168] "Request Body" body=""
	I1009 18:24:19.242109   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:19.242472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:19.741076   34792 type.go:168] "Request Body" body=""
	I1009 18:24:19.741162   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:19.741541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:20.241117   34792 type.go:168] "Request Body" body=""
	I1009 18:24:20.241204   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:20.241525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:20.741486   34792 type.go:168] "Request Body" body=""
	I1009 18:24:20.741556   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:20.741868   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:21.241426   34792 type.go:168] "Request Body" body=""
	I1009 18:24:21.241498   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:21.241806   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:21.241862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:21.741431   34792 type.go:168] "Request Body" body=""
	I1009 18:24:21.741537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:21.741868   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:22.241461   34792 type.go:168] "Request Body" body=""
	I1009 18:24:22.241535   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:22.241849   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:22.741438   34792 type.go:168] "Request Body" body=""
	I1009 18:24:22.741501   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:22.741846   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:23.241408   34792 type.go:168] "Request Body" body=""
	I1009 18:24:23.241477   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:23.241783   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:23.741400   34792 type.go:168] "Request Body" body=""
	I1009 18:24:23.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:23.741789   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:23.741845   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:24.241359   34792 type.go:168] "Request Body" body=""
	I1009 18:24:24.241431   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:24.241755   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:24.741348   34792 type.go:168] "Request Body" body=""
	I1009 18:24:24.741408   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:24.741733   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:25.241293   34792 type.go:168] "Request Body" body=""
	I1009 18:24:25.241374   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:25.241694   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:25.741621   34792 type.go:168] "Request Body" body=""
	I1009 18:24:25.741682   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:25.742037   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:25.742088   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:26.241707   34792 type.go:168] "Request Body" body=""
	I1009 18:24:26.241774   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:26.242098   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:26.741808   34792 type.go:168] "Request Body" body=""
	I1009 18:24:26.741871   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:26.742236   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:27.241893   34792 type.go:168] "Request Body" body=""
	I1009 18:24:27.241957   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:27.242307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:27.741971   34792 type.go:168] "Request Body" body=""
	I1009 18:24:27.742039   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:27.742363   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:27.742412   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:28.240944   34792 type.go:168] "Request Body" body=""
	I1009 18:24:28.241012   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:28.241383   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:28.740967   34792 type.go:168] "Request Body" body=""
	I1009 18:24:28.741047   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:28.741411   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:29.241219   34792 type.go:168] "Request Body" body=""
	I1009 18:24:29.241290   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:29.241653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:29.741274   34792 type.go:168] "Request Body" body=""
	I1009 18:24:29.741345   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:29.741655   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:30.241249   34792 type.go:168] "Request Body" body=""
	I1009 18:24:30.241326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:30.241636   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:30.241689   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:30.741565   34792 type.go:168] "Request Body" body=""
	I1009 18:24:30.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:30.741952   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:31.241609   34792 type.go:168] "Request Body" body=""
	I1009 18:24:31.241669   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:31.242013   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:31.741661   34792 type.go:168] "Request Body" body=""
	I1009 18:24:31.741727   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:31.742040   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:32.241675   34792 type.go:168] "Request Body" body=""
	I1009 18:24:32.241739   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:32.242047   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:32.242100   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:32.741353   34792 type.go:168] "Request Body" body=""
	I1009 18:24:32.741425   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:32.741746   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:33.241341   34792 type.go:168] "Request Body" body=""
	I1009 18:24:33.241401   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:33.241718   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:33.741321   34792 type.go:168] "Request Body" body=""
	I1009 18:24:33.741388   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:33.741692   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:34.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:24:34.241326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:34.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:34.741266   34792 type.go:168] "Request Body" body=""
	I1009 18:24:34.741339   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:34.741686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:34.741740   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:35.241256   34792 type.go:168] "Request Body" body=""
	I1009 18:24:35.241332   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:35.241644   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:35.741557   34792 type.go:168] "Request Body" body=""
	I1009 18:24:35.741623   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:35.741960   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:36.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:24:36.241698   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:36.242094   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:36.741738   34792 type.go:168] "Request Body" body=""
	I1009 18:24:36.741810   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:36.742164   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:36.742232   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:37.241811   34792 type.go:168] "Request Body" body=""
	I1009 18:24:37.241879   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:37.242219   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:37.741906   34792 type.go:168] "Request Body" body=""
	I1009 18:24:37.741972   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:37.742360   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:38.241974   34792 type.go:168] "Request Body" body=""
	I1009 18:24:38.242032   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:38.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:38.740970   34792 type.go:168] "Request Body" body=""
	I1009 18:24:38.741038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:38.741400   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:39.241238   34792 type.go:168] "Request Body" body=""
	I1009 18:24:39.241302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:39.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:39.241695   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:39.741304   34792 type.go:168] "Request Body" body=""
	I1009 18:24:39.741370   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:39.741689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:40.241283   34792 type.go:168] "Request Body" body=""
	I1009 18:24:40.241349   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:40.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:40.741596   34792 type.go:168] "Request Body" body=""
	I1009 18:24:40.741665   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:40.741992   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:41.241775   34792 type.go:168] "Request Body" body=""
	I1009 18:24:41.241853   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:41.242210   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:41.242282   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:41.741904   34792 type.go:168] "Request Body" body=""
	I1009 18:24:41.741970   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:41.742352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:42.240959   34792 type.go:168] "Request Body" body=""
	I1009 18:24:42.241085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:42.241411   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:42.741000   34792 type.go:168] "Request Body" body=""
	I1009 18:24:42.741063   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:42.741398   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:43.242037   34792 type.go:168] "Request Body" body=""
	I1009 18:24:43.242129   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:43.242476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:43.242528   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:43.741058   34792 type.go:168] "Request Body" body=""
	I1009 18:24:43.741124   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:43.741463   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-753440 is retried every ~500 ms from 18:24:44 through 18:25:45; every attempt gets an empty response (status="" milliseconds=0), and node_ready.go:55 repeats the same "connection refused" (will retry) warning roughly every 2 s ...]
	I1009 18:25:45.741752   34792 type.go:168] "Request Body" body=""
	I1009 18:25:45.741826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:45.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:45.742282   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:46.241931   34792 type.go:168] "Request Body" body=""
	I1009 18:25:46.242008   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:46.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:46.740984   34792 type.go:168] "Request Body" body=""
	I1009 18:25:46.741081   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:46.741473   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:47.241027   34792 type.go:168] "Request Body" body=""
	I1009 18:25:47.241148   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:47.241536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:47.741035   34792 type.go:168] "Request Body" body=""
	I1009 18:25:47.741101   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:47.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:48.241082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:48.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:48.241496   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:48.241548   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:48.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:48.741203   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:48.741562   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:49.241540   34792 type.go:168] "Request Body" body=""
	I1009 18:25:49.241609   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:49.241992   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:49.741668   34792 type.go:168] "Request Body" body=""
	I1009 18:25:49.741737   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:49.742062   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:50.241713   34792 type.go:168] "Request Body" body=""
	I1009 18:25:50.241779   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:50.242089   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:50.242165   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:50.741969   34792 type.go:168] "Request Body" body=""
	I1009 18:25:50.742080   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:50.742425   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:51.241055   34792 type.go:168] "Request Body" body=""
	I1009 18:25:51.241121   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:51.241485   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:51.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:25:51.741170   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:51.741493   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:52.241115   34792 type.go:168] "Request Body" body=""
	I1009 18:25:52.241209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:52.241541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:52.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:25:52.741307   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:52.741661   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:52.741713   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:53.241239   34792 type.go:168] "Request Body" body=""
	I1009 18:25:53.241326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:53.241653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:53.741250   34792 type.go:168] "Request Body" body=""
	I1009 18:25:53.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:53.741655   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:54.241252   34792 type.go:168] "Request Body" body=""
	I1009 18:25:54.241357   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:54.241717   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:54.741298   34792 type.go:168] "Request Body" body=""
	I1009 18:25:54.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:54.741680   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:54.741732   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:55.241249   34792 type.go:168] "Request Body" body=""
	I1009 18:25:55.241310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:55.241707   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:55.741639   34792 type.go:168] "Request Body" body=""
	I1009 18:25:55.741703   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:55.742036   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:56.241666   34792 type.go:168] "Request Body" body=""
	I1009 18:25:56.241729   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:56.242065   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:56.741838   34792 type.go:168] "Request Body" body=""
	I1009 18:25:56.741901   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:56.742249   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:56.742310   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:57.241936   34792 type.go:168] "Request Body" body=""
	I1009 18:25:57.242047   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:57.242403   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:57.741073   34792 type.go:168] "Request Body" body=""
	I1009 18:25:57.741156   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:57.741453   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:58.241102   34792 type.go:168] "Request Body" body=""
	I1009 18:25:58.241189   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:58.241532   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:58.741625   34792 type.go:168] "Request Body" body=""
	I1009 18:25:58.741731   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:58.742069   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:59.241918   34792 type.go:168] "Request Body" body=""
	I1009 18:25:59.242002   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:59.242382   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:59.242433   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:59.741586   34792 type.go:168] "Request Body" body=""
	I1009 18:25:59.741680   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:59.742047   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:00.241712   34792 type.go:168] "Request Body" body=""
	I1009 18:26:00.241778   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:00.242123   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:00.741944   34792 type.go:168] "Request Body" body=""
	I1009 18:26:00.742006   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:00.742335   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:01.241998   34792 type.go:168] "Request Body" body=""
	I1009 18:26:01.242063   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:01.242409   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:01.242463   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:01.740980   34792 type.go:168] "Request Body" body=""
	I1009 18:26:01.741043   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:01.741380   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:02.240968   34792 type.go:168] "Request Body" body=""
	I1009 18:26:02.241034   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:02.241387   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:02.740965   34792 type.go:168] "Request Body" body=""
	I1009 18:26:02.741036   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:02.741361   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:03.241979   34792 type.go:168] "Request Body" body=""
	I1009 18:26:03.242041   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:03.242370   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:03.740968   34792 type.go:168] "Request Body" body=""
	I1009 18:26:03.741033   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:03.741362   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:03.741412   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:04.242040   34792 type.go:168] "Request Body" body=""
	I1009 18:26:04.242108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:04.242468   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:04.741070   34792 type.go:168] "Request Body" body=""
	I1009 18:26:04.741158   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:04.741484   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:05.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:26:05.241107   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:05.241461   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:05.741242   34792 type.go:168] "Request Body" body=""
	I1009 18:26:05.741305   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:05.741627   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:05.741678   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:06.241201   34792 type.go:168] "Request Body" body=""
	I1009 18:26:06.241271   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:06.241594   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:06.741216   34792 type.go:168] "Request Body" body=""
	I1009 18:26:06.741302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:06.741638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:07.241228   34792 type.go:168] "Request Body" body=""
	I1009 18:26:07.241309   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:07.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:07.741295   34792 type.go:168] "Request Body" body=""
	I1009 18:26:07.741364   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:07.741662   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:07.741715   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:08.241237   34792 type.go:168] "Request Body" body=""
	I1009 18:26:08.241302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:08.241600   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:08.741196   34792 type.go:168] "Request Body" body=""
	I1009 18:26:08.741257   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:08.741600   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:09.241564   34792 type.go:168] "Request Body" body=""
	I1009 18:26:09.241629   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:09.241949   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:09.741615   34792 type.go:168] "Request Body" body=""
	I1009 18:26:09.741680   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:09.741985   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:09.742040   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:10.241636   34792 type.go:168] "Request Body" body=""
	I1009 18:26:10.241706   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:10.242002   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:10.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:26:10.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:10.742285   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:11.241928   34792 type.go:168] "Request Body" body=""
	I1009 18:26:11.241997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:11.242350   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:11.742032   34792 type.go:168] "Request Body" body=""
	I1009 18:26:11.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:11.742451   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:11.742508   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:12.241054   34792 type.go:168] "Request Body" body=""
	I1009 18:26:12.241123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:12.241536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:12.741176   34792 type.go:168] "Request Body" body=""
	I1009 18:26:12.741242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:12.741599   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:13.241179   34792 type.go:168] "Request Body" body=""
	I1009 18:26:13.241237   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:13.241552   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:13.741164   34792 type.go:168] "Request Body" body=""
	I1009 18:26:13.741229   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:13.741597   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:14.241174   34792 type.go:168] "Request Body" body=""
	I1009 18:26:14.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:14.241576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:14.241632   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:14.741184   34792 type.go:168] "Request Body" body=""
	I1009 18:26:14.741250   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:14.741553   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:15.241116   34792 type.go:168] "Request Body" body=""
	I1009 18:26:15.241224   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:15.241537   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:15.741317   34792 type.go:168] "Request Body" body=""
	I1009 18:26:15.741389   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:15.741689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:16.241241   34792 type.go:168] "Request Body" body=""
	I1009 18:26:16.241305   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:16.241632   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:16.241683   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:16.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:26:16.741325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:16.741630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:17.241224   34792 type.go:168] "Request Body" body=""
	I1009 18:26:17.241286   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:17.241599   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:17.741225   34792 type.go:168] "Request Body" body=""
	I1009 18:26:17.741291   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:17.741594   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:18.241198   34792 type.go:168] "Request Body" body=""
	I1009 18:26:18.241264   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:18.241577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:18.741185   34792 type.go:168] "Request Body" body=""
	I1009 18:26:18.741257   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:18.741577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:18.741626   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:19.241353   34792 type.go:168] "Request Body" body=""
	I1009 18:26:19.241426   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:19.241744   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:19.741299   34792 type.go:168] "Request Body" body=""
	I1009 18:26:19.741364   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:19.741663   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:20.241246   34792 type.go:168] "Request Body" body=""
	I1009 18:26:20.241316   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:20.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:20.741541   34792 type.go:168] "Request Body" body=""
	I1009 18:26:20.741607   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:20.741914   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:20.741966   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:21.241518   34792 type.go:168] "Request Body" body=""
	I1009 18:26:21.241583   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:21.241885   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:21.741448   34792 type.go:168] "Request Body" body=""
	I1009 18:26:21.741515   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:21.741816   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:22.241407   34792 type.go:168] "Request Body" body=""
	I1009 18:26:22.241471   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:22.241770   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:22.741331   34792 type.go:168] "Request Body" body=""
	I1009 18:26:22.741400   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:22.741698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:23.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:26:23.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:23.241638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:23.241693   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:23.741220   34792 type.go:168] "Request Body" body=""
	I1009 18:26:23.741300   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:23.741602   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:24.241221   34792 type.go:168] "Request Body" body=""
	I1009 18:26:24.241295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:24.241598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:24.741133   34792 type.go:168] "Request Body" body=""
	I1009 18:26:24.741216   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:24.741539   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:25.241114   34792 type.go:168] "Request Body" body=""
	I1009 18:26:25.241213   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:25.241546   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:25.741511   34792 type.go:168] "Request Body" body=""
	I1009 18:26:25.741576   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:25.741865   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:25.741922   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:26.241516   34792 type.go:168] "Request Body" body=""
	I1009 18:26:26.241579   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:26.241882   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:26.741449   34792 type.go:168] "Request Body" body=""
	I1009 18:26:26.741511   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:26.741816   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:27.241391   34792 type.go:168] "Request Body" body=""
	I1009 18:26:27.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:27.241802   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:27.741394   34792 type.go:168] "Request Body" body=""
	I1009 18:26:27.741461   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:27.741756   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:28.241317   34792 type.go:168] "Request Body" body=""
	I1009 18:26:28.241388   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:28.241721   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:28.241777   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:28.741288   34792 type.go:168] "Request Body" body=""
	I1009 18:26:28.741355   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:28.741648   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:29.241543   34792 type.go:168] "Request Body" body=""
	I1009 18:26:29.241610   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:29.241914   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:29.741477   34792 type.go:168] "Request Body" body=""
	I1009 18:26:29.741542   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:29.741838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:30.241416   34792 type.go:168] "Request Body" body=""
	I1009 18:26:30.241476   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:30.241809   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:30.241861   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:30.741676   34792 type.go:168] "Request Body" body=""
	I1009 18:26:30.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:30.742049   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:31.241791   34792 type.go:168] "Request Body" body=""
	I1009 18:26:31.241858   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:31.242183   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:31.741839   34792 type.go:168] "Request Body" body=""
	I1009 18:26:31.741908   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:31.742213   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:32.241895   34792 type.go:168] "Request Body" body=""
	I1009 18:26:32.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:32.242308   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:32.242358   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:32.741973   34792 type.go:168] "Request Body" body=""
	I1009 18:26:32.742037   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:32.742358   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:33.241033   34792 type.go:168] "Request Body" body=""
	I1009 18:26:33.241095   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:33.241444   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:33.741092   34792 type.go:168] "Request Body" body=""
	I1009 18:26:33.741183   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:33.741483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:34.241043   34792 type.go:168] "Request Body" body=""
	I1009 18:26:34.241106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:34.241473   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:34.741040   34792 type.go:168] "Request Body" body=""
	I1009 18:26:34.741103   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:34.741434   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:34.741487   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 poll repeats every ~500 ms from 18:26:35 onward, each request drawing an empty response; node_ready.go:55 logs the same "will retry ... connection refused" warning roughly every 2-2.5 s, the last at 18:27:35.241: ...]
	W1009 18:27:35.241703   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:36.741778   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.741844   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.742212   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:37.241852   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.241925   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.242276   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:37.242330   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:37.741978   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.742052   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.742377   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.240952   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.241027   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.241428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.741115   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.741222   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.741569   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.241531   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.241853   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.741475   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.741888   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:39.741940   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:40.241482   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.241546   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.241865   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:40.741822   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.741912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.742310   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.241924   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.241992   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.242352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.742037   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.742123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.742467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:41.742533   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:42.241062   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.241131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.241483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:42.741199   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.741261   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.741576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.241209   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.241285   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.241620   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.741257   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.741675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:44.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.241630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:44.241684   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:44.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.741621   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.241009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.241089   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.241464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.741255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:46.241261   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.241687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:46.241736   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:46.741271   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.741338   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.241666   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.741310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.741653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.241251   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.241651   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.741328   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.741647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:48.741699   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:49.241692   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.241772   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.242116   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:49.741779   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.741846   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.242357   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.741207   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:51.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.241313   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:51.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:51.741256   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.741385   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.741740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.241321   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.241392   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.241724   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.741315   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.741382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.741729   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:53.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.241398   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:53.241797   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:53.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.741465   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.741821   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.241418   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.241482   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.241803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.741399   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.741794   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:55.241395   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:55.241851   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:55.741689   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.741763   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.742091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.241733   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.241801   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.242128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.741823   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.741896   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.742277   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:57.241950   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.242025   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:57.242451   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:57.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.741454   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.241127   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.241225   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.741208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.741281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:59.241113   34792 node_ready.go:38] duration metric: took 6m0.000256287s for node "functional-753440" to be "Ready" ...
	I1009 18:27:59.244464   34792 out.go:203] 
	W1009 18:27:59.246567   34792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 18:27:59.246590   34792 out.go:285] * 
	W1009 18:27:59.248293   34792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:27:59.250105   34792 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:28:08 functional-753440 crio[2938]: time="2025-10-09T18:28:08.354269441Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=4ea07aef-5047-4eee-8275-ce73425529bc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:08 functional-753440 crio[2938]: time="2025-10-09T18:28:08.655588461Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3a784bd7-93f3-4a3d-b375-a3f78dc9eb94 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:08 functional-753440 crio[2938]: time="2025-10-09T18:28:08.655712374Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3a784bd7-93f3-4a3d-b375-a3f78dc9eb94 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:08 functional-753440 crio[2938]: time="2025-10-09T18:28:08.65573974Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=3a784bd7-93f3-4a3d-b375-a3f78dc9eb94 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.096105121Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3cbc98a3-8aa5-47a9-bde9-efbb88f8447a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.096276461Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3cbc98a3-8aa5-47a9-bde9-efbb88f8447a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.096313018Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=3cbc98a3-8aa5-47a9-bde9-efbb88f8447a name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.120648151Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c13c7024-a40b-45e6-b0a3-d7e04ef340f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.120768204Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c13c7024-a40b-45e6-b0a3-d7e04ef340f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.120797724Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c13c7024-a40b-45e6-b0a3-d7e04ef340f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.14496176Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=42cee86d-2d7a-4cec-9d74-65293f5a0cff name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.14509242Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=42cee86d-2d7a-4cec-9d74-65293f5a0cff name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.145124298Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=42cee86d-2d7a-4cec-9d74-65293f5a0cff name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.542737234Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=065175c8-91bf-4012-b9b3-5d9f72220ddb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.544713622Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b5c624b3-1d60-42b4-8984-d4a17802b148 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.545742903Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.546007765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.549577694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.549972908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.566608991Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.567967013Z" level=info msg="createCtr: deleting container ID b3498f589bd49e0b9c940b743b5094ba76aa060907c421c898d65866b3194079 from idIndex" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.568007083Z" level=info msg="createCtr: removing container b3498f589bd49e0b9c940b743b5094ba76aa060907c421c898d65866b3194079" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.568042928Z" level=info msg="createCtr: deleting container b3498f589bd49e0b9c940b743b5094ba76aa060907c421c898d65866b3194079 from storage" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.5700798Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.608865345Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ffe9e9e8-4dc1-4383-bccb-ffffa17ab717 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:28:11.047426    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:11.048021    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:11.049622    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:11.050034    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:11.051577    5262 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:28:11 up  1:10,  0 user,  load average: 0.07, 0.09, 0.10
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.576623    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:01 functional-753440 kubelet[1796]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:01 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:01 functional-753440 kubelet[1796]: E1009 18:28:01.576670    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	Oct 09 18:28:03 functional-753440 kubelet[1796]: E1009 18:28:03.583630    1796 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:28:06 functional-753440 kubelet[1796]: E1009 18:28:06.541981    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:06 functional-753440 kubelet[1796]: E1009 18:28:06.575864    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:06 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:06 functional-753440 kubelet[1796]:  > podSandboxID="3bfe74c8d570ecc37f6892435ddc21354701de89899703d3fea256f249b5032e"
	Oct 09 18:28:06 functional-753440 kubelet[1796]: E1009 18:28:06.575983    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:06 functional-753440 kubelet[1796]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(d8200e5d2f7672a0974c7d953c472e15): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:06 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:06 functional-753440 kubelet[1796]: E1009 18:28:06.576024    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="d8200e5d2f7672a0974c7d953c472e15"
	Oct 09 18:28:07 functional-753440 kubelet[1796]: E1009 18:28:07.054944    1796 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753440.186ce57ba0b4bd78\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce57ba0b4bd78  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:17:53.534958968 +0000 UTC m=+0.381579824,LastTimestamp:2025-10-09 18:17:53.536403063 +0000 UTC m=+0.383023919,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:28:08 functional-753440 kubelet[1796]: E1009 18:28:08.228302    1796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:28:08 functional-753440 kubelet[1796]: I1009 18:28:08.428720    1796 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:28:08 functional-753440 kubelet[1796]: E1009 18:28:08.429128    1796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.542285    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.570410    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:09 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:09 functional-753440 kubelet[1796]:  > podSandboxID="a0f669ac9226ee4ac7b841aacfe05ece4235d10b02fe7bb351eab32cadb9e24d"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.570509    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:09 functional-753440 kubelet[1796]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:09 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.570540    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	

                                                
                                                
-- /stdout --
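
The six-minute loop at the top of these logs is minikube polling the node's Ready condition while the apiserver keeps refusing connections on 192.168.49.2:8441. A rough stand-alone equivalent of that wait, sketched with plain kubectl (node name and kubeconfig path are taken from the logs above; the loop itself is illustrative, not minikube's actual implementation):

	# Poll the Ready condition once per second until it reports "True";
	# in minikube's case the surrounding 6m timeout eventually fires.
	while true; do
	  status=$(kubectl --kubeconfig /var/lib/minikube/kubeconfig \
	    get node functional-753440 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' 2>/dev/null)
	  [ "$status" = "True" ] && break
	  sleep 1
	done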
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (311.733995ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.19s)
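
The underlying failure is visible in the kubelet and CRI-O sections above: every control-plane container dies at creation with "cannot open sd-bus: No such file or directory", i.e. the runtime is trying to reach systemd over D-Bus (typically because it is configured for the systemd cgroup manager) and no bus socket is reachable inside the node. A hypothetical way to confirm this from the host; the socket paths are the standard systemd locations and the grep target is CRI-O's stock config directory, both assumptions rather than values from this report:

	# Check whether a systemd bus socket exists inside the node container.
	docker exec functional-753440 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Inspect which cgroup manager CRI-O is configured to use.
	docker exec functional-753440 grep -R "cgroup_manager" /etc/crio/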

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-753440 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-753440 get pods: exit status 1 (101.00917ms)

                                                
                                                
** stderr ** 
	E1009 18:28:11.996349   40695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:11.996720   40695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:11.998245   40695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:11.998576   40695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:28:12.000028   40695 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-753440 get pods": exit status 1
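The refused connection can also be probed directly; a minimal sketch (the endpoint is taken from the stderr above, the host-side port from the docker inspect output below, and /healthz is the apiserver's standard health endpoint):

	# Probe the apiserver endpoint kubectl is failing against.
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
	# The same container port is published on the host as 127.0.0.1:32781.
	curl -sk --max-time 5 https://127.0.0.1:32781/healthz || echo "published port unreachable"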
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
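The port map in the inspect output above is what the provisioning steps later in this log rely on: each container port is published on 127.0.0.1 at a dynamically assigned host port. As a quick sketch, the forwarded SSH port can be read back with the same Go template minikube itself runs further down (assuming the functional-753440 container is still up):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-753440
	# for this run the template resolves to 32778, matching "22/tcp" under "Ports" in the JSON above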
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (292.884359ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
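The helper tolerates exit status 2 here because minikube status encodes component state in its exit code (per minikube status --help the bits flag stopped components), so the host can report Running while another component is down. A minimal sketch of checking both the text and the code:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p functional-753440 -n functional-753440
	rc=$?   # 0 means everything is running; nonzero flags stopped components
	echo "host status exited with $rc"   # this run: prints Running, then 2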
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-663194 --log_dir /tmp/nospam-663194 pause                                                              │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                              │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ -p functional-753440 --alsologtostderr -v=8                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:21 UTC │                     │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.1                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.3                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:latest                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add minikube-local-cache-test:functional-753440                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache delete minikube-local-cache-test:functional-753440                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl images                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ cache   │ functional-753440 cache reload                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ kubectl │ functional-753440 kubectl -- --context functional-753440 get pods                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
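The final audit row is the operation under test: kubectl driven through the minikube wrapper. A minimal reproduction with the binary and profile used in this report would look like:

	out/minikube-linux-amd64 -p functional-753440 kubectl -- --context functional-753440 get pods
	# everything after "--" is handed to the version-matched kubectl that minikube bundles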
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:21:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:21:55.407242   34792 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:21:55.407482   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407490   34792 out.go:374] Setting ErrFile to fd 2...
	I1009 18:21:55.407494   34792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:21:55.407669   34792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:21:55.408109   34792 out.go:368] Setting JSON to false
	I1009 18:21:55.408948   34792 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3863,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:21:55.409029   34792 start.go:141] virtualization: kvm guest
	I1009 18:21:55.411208   34792 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:21:55.412706   34792 notify.go:220] Checking for updates...
	I1009 18:21:55.412728   34792 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:21:55.414107   34792 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:21:55.415609   34792 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:55.417005   34792 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:21:55.418411   34792 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:21:55.419884   34792 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:21:55.421538   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:55.421658   34792 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:21:55.445068   34792 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:21:55.445204   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.504624   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.494450296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.504746   34792 docker.go:318] overlay module found
	I1009 18:21:55.507261   34792 out.go:179] * Using the docker driver based on existing profile
	I1009 18:21:55.508504   34792 start.go:305] selected driver: docker
	I1009 18:21:55.508518   34792 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.508594   34792 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:21:55.508665   34792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:21:55.566793   34792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:21:55.557358643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:21:55.567631   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:55.567714   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:55.567780   34792 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:55.569913   34792 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:21:55.571250   34792 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:21:55.572672   34792 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:21:55.573890   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:55.573921   34792 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:21:55.573933   34792 cache.go:64] Caching tarball of preloaded images
	I1009 18:21:55.573992   34792 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:21:55.574016   34792 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:21:55.574025   34792 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:21:55.574109   34792 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:21:55.593603   34792 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:21:55.593631   34792 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:21:55.593646   34792 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:21:55.593672   34792 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:21:55.593732   34792 start.go:364] duration metric: took 38.489µs to acquireMachinesLock for "functional-753440"
	I1009 18:21:55.593749   34792 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:21:55.593758   34792 fix.go:54] fixHost starting: 
	I1009 18:21:55.593970   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:55.610925   34792 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:21:55.610951   34792 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:21:55.612681   34792 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:21:55.612704   34792 machine.go:93] provisionDockerMachine start ...
	I1009 18:21:55.612764   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.630174   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.630389   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.630401   34792 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:21:55.773949   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.773975   34792 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:21:55.774031   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.792726   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.792949   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.792962   34792 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:21:55.945969   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:21:55.946040   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:55.963600   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:55.963821   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:55.963839   34792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:21:56.108677   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:21:56.108700   34792 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:21:56.108717   34792 ubuntu.go:190] setting up certificates
	I1009 18:21:56.108727   34792 provision.go:84] configureAuth start
	I1009 18:21:56.108783   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:56.127107   34792 provision.go:143] copyHostCerts
	I1009 18:21:56.127166   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127197   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:21:56.127212   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:21:56.127290   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:21:56.127394   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127416   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:21:56.127420   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:21:56.127449   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:21:56.127507   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127523   34792 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:21:56.127526   34792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:21:56.127549   34792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:21:56.127598   34792 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:21:56.380428   34792 provision.go:177] copyRemoteCerts
	I1009 18:21:56.380482   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:21:56.380515   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.398054   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.500395   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:21:56.500448   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:21:56.517603   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:21:56.517655   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:21:56.534349   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:21:56.534397   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 18:21:56.551305   34792 provision.go:87] duration metric: took 442.551304ms to configureAuth
	I1009 18:21:56.551330   34792 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:21:56.551498   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:56.551579   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.568651   34792 main.go:141] libmachine: Using SSH client type: native
	I1009 18:21:56.568866   34792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:21:56.568881   34792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:21:56.838390   34792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:21:56.838414   34792 machine.go:96] duration metric: took 1.225703269s to provisionDockerMachine
	I1009 18:21:56.838426   34792 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:21:56.838437   34792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:21:56.838510   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:21:56.838559   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:56.856450   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:56.959658   34792 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:21:56.963119   34792 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1009 18:21:56.963150   34792 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1009 18:21:56.963158   34792 command_runner.go:130] > VERSION_ID="12"
	I1009 18:21:56.963165   34792 command_runner.go:130] > VERSION="12 (bookworm)"
	I1009 18:21:56.963174   34792 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1009 18:21:56.963179   34792 command_runner.go:130] > ID=debian
	I1009 18:21:56.963186   34792 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1009 18:21:56.963194   34792 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1009 18:21:56.963212   34792 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1009 18:21:56.963315   34792 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:21:56.963334   34792 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:21:56.963342   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:21:56.963382   34792 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:21:56.963448   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:21:56.963463   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:21:56.963529   34792 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:21:56.963535   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> /etc/test/nested/copy/14880/hosts
	I1009 18:21:56.963565   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:21:56.970888   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:56.988730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:21:57.005907   34792 start.go:296] duration metric: took 167.469505ms for postStartSetup
	I1009 18:21:57.005971   34792 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:21:57.006025   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.023806   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.123166   34792 command_runner.go:130] > 39%
	I1009 18:21:57.123235   34792 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:21:57.127917   34792 command_runner.go:130] > 179G
	I1009 18:21:57.127948   34792 fix.go:56] duration metric: took 1.534189396s for fixHost
	I1009 18:21:57.127960   34792 start.go:83] releasing machines lock for "functional-753440", held for 1.534218366s
	I1009 18:21:57.128034   34792 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:21:57.145978   34792 ssh_runner.go:195] Run: cat /version.json
	I1009 18:21:57.146019   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.146063   34792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:21:57.146159   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:57.164302   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.164547   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:57.263542   34792 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759745255-21703", "minikube_version": "v1.37.0", "commit": "a51fe4b7ffc88febd8814e8831f38772e976d097"}
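	The /version.json probe above is how minikube confirms the running node matches the expected kic base image. A manual equivalent for this profile (a sketch, assuming the node is reachable over SSH):

	out/minikube-linux-amd64 -p functional-753440 ssh -- cat /version.json
	# kicbase_version here should match the v0.0.48-1759745255-21703 tag in the cluster config above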
	I1009 18:21:57.263690   34792 ssh_runner.go:195] Run: systemctl --version
	I1009 18:21:57.316955   34792 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 18:21:57.317002   34792 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1009 18:21:57.317022   34792 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 18:21:57.317074   34792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:21:57.353021   34792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:21:57.357737   34792 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 18:21:57.357788   34792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:21:57.357834   34792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:21:57.365811   34792 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:21:57.365833   34792 start.go:495] detecting cgroup driver to use...
	I1009 18:21:57.365861   34792 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:21:57.365903   34792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:21:57.380237   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:21:57.392796   34792 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:21:57.392859   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:21:57.407315   34792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:21:57.419892   34792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:21:57.506572   34792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:21:57.589596   34792 docker.go:234] disabling docker service ...
	I1009 18:21:57.589673   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:21:57.603725   34792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:21:57.615780   34792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:21:57.696218   34792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:21:57.781915   34792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:21:57.794534   34792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:21:57.808497   34792 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 18:21:57.808534   34792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:21:57.808589   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.817764   34792 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:21:57.817814   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.827115   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.836066   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.844563   34792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:21:57.852458   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.861227   34792 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.869900   34792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:57.878917   34792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:21:57.886570   34792 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 18:21:57.886644   34792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:21:57.894517   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:57.979064   34792 ssh_runner.go:195] Run: sudo systemctl restart crio
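	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before cri-o is restarted. A sketch for spot-checking the three values this run changes (pause image, cgroup manager, conmon cgroup):

	out/minikube-linux-amd64 -p functional-753440 ssh -- \
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd", conmon_cgroup = "pod"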
	I1009 18:21:58.090717   34792 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:21:58.090783   34792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:21:58.095044   34792 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 18:21:58.095068   34792 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 18:21:58.095074   34792 command_runner.go:130] > Device: 0,59	Inode: 3803        Links: 1
	I1009 18:21:58.095080   34792 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.095085   34792 command_runner.go:130] > Access: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095093   34792 command_runner.go:130] > Modify: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095101   34792 command_runner.go:130] > Change: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095108   34792 command_runner.go:130] >  Birth: 2025-10-09 18:21:58.072690390 +0000
	I1009 18:21:58.095130   34792 start.go:563] Will wait 60s for crictl version
	I1009 18:21:58.095214   34792 ssh_runner.go:195] Run: which crictl
	I1009 18:21:58.099101   34792 command_runner.go:130] > /usr/local/bin/crictl
	I1009 18:21:58.099187   34792 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:21:58.122816   34792 command_runner.go:130] > Version:  0.1.0
	I1009 18:21:58.122840   34792 command_runner.go:130] > RuntimeName:  cri-o
	I1009 18:21:58.122845   34792 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1009 18:21:58.122850   34792 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 18:21:58.122867   34792 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
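	Because /etc/crictl.yaml (written a few steps earlier) already pins the runtime endpoint to /var/run/crio/crio.sock, the same version check works by hand inside the node; a sketch:

	out/minikube-linux-amd64 -p functional-753440 ssh -- sudo crictl version
	# equivalent to passing --runtime-endpoint unix:///var/run/crio/crio.sock explicitly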
	I1009 18:21:58.122920   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.149899   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.149922   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.149928   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.149933   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.149944   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.149949   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.149952   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.149957   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.149961   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.149964   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.149967   34792 command_runner.go:130] >      static
	I1009 18:21:58.149971   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.149975   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.149978   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.149982   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.149988   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.149991   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.149998   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.150002   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.150007   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.151351   34792 ssh_runner.go:195] Run: crio --version
	I1009 18:21:58.178662   34792 command_runner.go:130] > crio version 1.34.1
	I1009 18:21:58.178683   34792 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1009 18:21:58.178689   34792 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1009 18:21:58.178693   34792 command_runner.go:130] >    GitTreeState:   dirty
	I1009 18:21:58.178698   34792 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1009 18:21:58.178702   34792 command_runner.go:130] >    GoVersion:      go1.24.6
	I1009 18:21:58.178706   34792 command_runner.go:130] >    Compiler:       gc
	I1009 18:21:58.178714   34792 command_runner.go:130] >    Platform:       linux/amd64
	I1009 18:21:58.178718   34792 command_runner.go:130] >    Linkmode:       static
	I1009 18:21:58.178721   34792 command_runner.go:130] >    BuildTags:
	I1009 18:21:58.178724   34792 command_runner.go:130] >      static
	I1009 18:21:58.178728   34792 command_runner.go:130] >      netgo
	I1009 18:21:58.178732   34792 command_runner.go:130] >      osusergo
	I1009 18:21:58.178735   34792 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1009 18:21:58.178739   34792 command_runner.go:130] >      seccomp
	I1009 18:21:58.178742   34792 command_runner.go:130] >      apparmor
	I1009 18:21:58.178757   34792 command_runner.go:130] >      selinux
	I1009 18:21:58.178764   34792 command_runner.go:130] >    LDFlags:          unknown
	I1009 18:21:58.178768   34792 command_runner.go:130] >    SeccompEnabled:   true
	I1009 18:21:58.178771   34792 command_runner.go:130] >    AppArmorEnabled:  false
	I1009 18:21:58.181232   34792 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:21:58.182844   34792 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:21:58.200852   34792 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:21:58.205024   34792 command_runner.go:130] > 192.168.49.1	host.minikube.internal
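	The network inspect template above folds name, driver, subnet, gateway, MTU, and container IPs into one JSON blob. To pull just the addressing for this profile's network (values consistent with the container inspect JSON near the top of this log), a simpler template suffices:

	docker network inspect functional-753440 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# for this run: 192.168.49.0/24 via 192.168.49.1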
	I1009 18:21:58.205096   34792 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:21:58.205232   34792 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:21:58.205276   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.234303   34792 command_runner.go:130] > {
	I1009 18:21:58.234338   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.234345   34792 command_runner.go:130] >     {
	I1009 18:21:58.234355   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.234362   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234369   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.234373   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234378   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234388   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.234400   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.234409   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234417   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.234426   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234435   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234443   34792 command_runner.go:130] >     },
	I1009 18:21:58.234449   34792 command_runner.go:130] >     {
	I1009 18:21:58.234460   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.234468   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234478   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.234486   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234494   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234509   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.234523   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.234532   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234539   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.234548   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234565   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234581   34792 command_runner.go:130] >     },
	I1009 18:21:58.234590   34792 command_runner.go:130] >     {
	I1009 18:21:58.234600   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.234610   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234619   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.234627   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234635   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234649   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.234665   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.234673   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234680   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.234689   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.234697   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234713   34792 command_runner.go:130] >     },
	I1009 18:21:58.234721   34792 command_runner.go:130] >     {
	I1009 18:21:58.234731   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.234740   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234749   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.234757   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234765   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234780   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.234794   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.234802   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234809   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.234817   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234824   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234833   34792 command_runner.go:130] >       },
	I1009 18:21:58.234849   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.234858   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.234864   34792 command_runner.go:130] >     },
	I1009 18:21:58.234871   34792 command_runner.go:130] >     {
	I1009 18:21:58.234882   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.234891   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.234906   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.234914   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234921   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.234936   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.234952   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.234960   34792 command_runner.go:130] >       ],
	I1009 18:21:58.234967   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.234976   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.234984   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.234991   34792 command_runner.go:130] >       },
	I1009 18:21:58.234999   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235008   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235015   34792 command_runner.go:130] >     },
	I1009 18:21:58.235023   34792 command_runner.go:130] >     {
	I1009 18:21:58.235033   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.235042   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235052   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.235059   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235065   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235078   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.235098   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.235106   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235113   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.235122   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235130   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235152   34792 command_runner.go:130] >       },
	I1009 18:21:58.235159   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235168   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235174   34792 command_runner.go:130] >     },
	I1009 18:21:58.235183   34792 command_runner.go:130] >     {
	I1009 18:21:58.235193   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.235202   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235211   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.235227   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235236   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235248   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.235262   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.235271   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235278   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.235286   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235294   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235302   34792 command_runner.go:130] >     },
	I1009 18:21:58.235314   34792 command_runner.go:130] >     {
	I1009 18:21:58.235326   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.235333   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235344   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.235352   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235359   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235373   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.235408   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.235416   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235424   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.235433   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235441   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.235450   34792 command_runner.go:130] >       },
	I1009 18:21:58.235456   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235464   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.235470   34792 command_runner.go:130] >     },
	I1009 18:21:58.235477   34792 command_runner.go:130] >     {
	I1009 18:21:58.235488   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.235496   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.235508   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.235515   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235522   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.235536   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.235550   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.235566   34792 command_runner.go:130] >       ],
	I1009 18:21:58.235576   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.235582   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.235592   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.235599   34792 command_runner.go:130] >       },
	I1009 18:21:58.235606   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.235615   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.235621   34792 command_runner.go:130] >     }
	I1009 18:21:58.235627   34792 command_runner.go:130] >   ]
	I1009 18:21:58.235633   34792 command_runner.go:130] > }
	I1009 18:21:58.236008   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.236027   34792 crio.go:433] Images already preloaded, skipping extraction
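	For readers reproducing this preload check by hand, here is a minimal Go sketch that runs the same `sudo crictl images --output json` command and decodes the payload shown above. The struct fields mirror the JSON in this log ("id", "repoTags", "repoDigests", "size", "username", "pinned"); they are an assumption based on this output, not the canonical CRI API Go types, and note that "size" is a string in this payload:

	// sketch only: decode the crictl image list as it appears in this log
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // reported as a decimal string, e.g. "31470524"
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// same invocation as the ssh_runner line above
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size)
		}
	}

	A preload check like minikube's amounts to confirming that every expected tag (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, pause, storage-provisioner, kindnet) appears in list.Images before deciding to skip extraction.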
	I1009 18:21:58.236090   34792 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:21:58.260405   34792 command_runner.go:130] > {
	I1009 18:21:58.260434   34792 command_runner.go:130] >   "images":  [
	I1009 18:21:58.260440   34792 command_runner.go:130] >     {
	I1009 18:21:58.260454   34792 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1009 18:21:58.260464   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260473   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1009 18:21:58.260483   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260490   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260505   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1009 18:21:58.260520   34792 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1009 18:21:58.260529   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260540   34792 command_runner.go:130] >       "size":  "109379124",
	I1009 18:21:58.260550   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260560   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260566   34792 command_runner.go:130] >     },
	I1009 18:21:58.260575   34792 command_runner.go:130] >     {
	I1009 18:21:58.260586   34792 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 18:21:58.260593   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260606   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 18:21:58.260615   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260624   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260639   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 18:21:58.260653   34792 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 18:21:58.260661   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260667   34792 command_runner.go:130] >       "size":  "31470524",
	I1009 18:21:58.260674   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260681   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260689   34792 command_runner.go:130] >     },
	I1009 18:21:58.260698   34792 command_runner.go:130] >     {
	I1009 18:21:58.260711   34792 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1009 18:21:58.260721   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260732   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1009 18:21:58.260740   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260746   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260759   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1009 18:21:58.260769   34792 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1009 18:21:58.260777   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260785   34792 command_runner.go:130] >       "size":  "76103547",
	I1009 18:21:58.260794   34792 command_runner.go:130] >       "username":  "nonroot",
	I1009 18:21:58.260804   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260812   34792 command_runner.go:130] >     },
	I1009 18:21:58.260817   34792 command_runner.go:130] >     {
	I1009 18:21:58.260829   34792 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1009 18:21:58.260838   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260848   34792 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1009 18:21:58.260854   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260861   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.260876   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1009 18:21:58.260890   34792 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1009 18:21:58.260897   34792 command_runner.go:130] >       ],
	I1009 18:21:58.260904   34792 command_runner.go:130] >       "size":  "195976448",
	I1009 18:21:58.260914   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.260923   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.260931   34792 command_runner.go:130] >       },
	I1009 18:21:58.260939   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.260949   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.260957   34792 command_runner.go:130] >     },
	I1009 18:21:58.260965   34792 command_runner.go:130] >     {
	I1009 18:21:58.260974   34792 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1009 18:21:58.260984   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.260992   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1009 18:21:58.261000   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261007   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261018   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1009 18:21:58.261032   34792 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1009 18:21:58.261040   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261047   34792 command_runner.go:130] >       "size":  "89046001",
	I1009 18:21:58.261056   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261066   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261073   34792 command_runner.go:130] >       },
	I1009 18:21:58.261083   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261093   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261101   34792 command_runner.go:130] >     },
	I1009 18:21:58.261107   34792 command_runner.go:130] >     {
	I1009 18:21:58.261119   34792 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1009 18:21:58.261128   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261153   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1009 18:21:58.261159   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261169   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261181   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1009 18:21:58.261196   34792 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1009 18:21:58.261205   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261214   34792 command_runner.go:130] >       "size":  "76004181",
	I1009 18:21:58.261223   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261234   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261243   34792 command_runner.go:130] >       },
	I1009 18:21:58.261249   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261258   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261266   34792 command_runner.go:130] >     },
	I1009 18:21:58.261270   34792 command_runner.go:130] >     {
	I1009 18:21:58.261283   34792 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1009 18:21:58.261295   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261306   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1009 18:21:58.261314   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261321   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261334   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1009 18:21:58.261349   34792 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1009 18:21:58.261356   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261364   34792 command_runner.go:130] >       "size":  "73138073",
	I1009 18:21:58.261372   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261379   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261384   34792 command_runner.go:130] >     },
	I1009 18:21:58.261393   34792 command_runner.go:130] >     {
	I1009 18:21:58.261402   34792 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1009 18:21:58.261409   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261417   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1009 18:21:58.261422   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261428   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261439   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1009 18:21:58.261460   34792 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1009 18:21:58.261467   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261473   34792 command_runner.go:130] >       "size":  "53844823",
	I1009 18:21:58.261482   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261491   34792 command_runner.go:130] >         "value":  "0"
	I1009 18:21:58.261498   34792 command_runner.go:130] >       },
	I1009 18:21:58.261507   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261516   34792 command_runner.go:130] >       "pinned":  false
	I1009 18:21:58.261525   34792 command_runner.go:130] >     },
	I1009 18:21:58.261533   34792 command_runner.go:130] >     {
	I1009 18:21:58.261543   34792 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1009 18:21:58.261549   34792 command_runner.go:130] >       "repoTags":  [
	I1009 18:21:58.261555   34792 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.261563   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261570   34792 command_runner.go:130] >       "repoDigests":  [
	I1009 18:21:58.261584   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1009 18:21:58.261597   34792 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1009 18:21:58.261607   34792 command_runner.go:130] >       ],
	I1009 18:21:58.261614   34792 command_runner.go:130] >       "size":  "742092",
	I1009 18:21:58.261620   34792 command_runner.go:130] >       "uid":  {
	I1009 18:21:58.261626   34792 command_runner.go:130] >         "value":  "65535"
	I1009 18:21:58.261632   34792 command_runner.go:130] >       },
	I1009 18:21:58.261636   34792 command_runner.go:130] >       "username":  "",
	I1009 18:21:58.261641   34792 command_runner.go:130] >       "pinned":  true
	I1009 18:21:58.261649   34792 command_runner.go:130] >     }
	I1009 18:21:58.261655   34792 command_runner.go:130] >   ]
	I1009 18:21:58.261663   34792 command_runner.go:130] > }
	I1009 18:21:58.262011   34792 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:21:58.262027   34792 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:21:58.262034   34792 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:21:58.262124   34792 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
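	The [Unit]/[Service] fragment logged above is the systemd override minikube renders for the kubelet. Written out as a drop-in file it would look roughly as follows; the file path here is assumed for illustration only, and the empty ExecStart= line is the standard systemd idiom for clearing a packaged start command before substituting a new one:

	# assumed path: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	[Unit]
	Wants=crio.service

	[Service]
	# blank ExecStart= resets the unit's existing start command
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

	[Install]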
	I1009 18:21:58.262213   34792 ssh_runner.go:195] Run: crio config
	I1009 18:21:58.302300   34792 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 18:21:58.302331   34792 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 18:21:58.302340   34792 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 18:21:58.302345   34792 command_runner.go:130] > #
	I1009 18:21:58.302356   34792 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 18:21:58.302365   34792 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 18:21:58.302374   34792 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 18:21:58.302388   34792 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 18:21:58.302395   34792 command_runner.go:130] > # reload'.
	I1009 18:21:58.302413   34792 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 18:21:58.302424   34792 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 18:21:58.302434   34792 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 18:21:58.302446   34792 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 18:21:58.302451   34792 command_runner.go:130] > [crio]
	I1009 18:21:58.302460   34792 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 18:21:58.302491   34792 command_runner.go:130] > # containers images, in this directory.
	I1009 18:21:58.302515   34792 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 18:21:58.302526   34792 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 18:21:58.302534   34792 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1009 18:21:58.302549   34792 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 18:21:58.302558   34792 command_runner.go:130] > # imagestore = ""
	I1009 18:21:58.302569   34792 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 18:21:58.302588   34792 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 18:21:58.302596   34792 command_runner.go:130] > # storage_driver = "overlay"
	I1009 18:21:58.302604   34792 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 18:21:58.302618   34792 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 18:21:58.302625   34792 command_runner.go:130] > # storage_option = [
	I1009 18:21:58.302630   34792 command_runner.go:130] > # ]
	I1009 18:21:58.302640   34792 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 18:21:58.302649   34792 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 18:21:58.302660   34792 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 18:21:58.302668   34792 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 18:21:58.302681   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 18:21:58.302689   34792 command_runner.go:130] > # always happen on a node reboot
	I1009 18:21:58.302700   34792 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 18:21:58.302714   34792 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 18:21:58.302727   34792 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 18:21:58.302738   34792 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 18:21:58.302745   34792 command_runner.go:130] > # version_file_persist = ""
	I1009 18:21:58.302760   34792 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 18:21:58.302779   34792 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 18:21:58.302786   34792 command_runner.go:130] > # internal_wipe = true
	I1009 18:21:58.302800   34792 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 18:21:58.302809   34792 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 18:21:58.302823   34792 command_runner.go:130] > # internal_repair = true
	I1009 18:21:58.302832   34792 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 18:21:58.302841   34792 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 18:21:58.302850   34792 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 18:21:58.302858   34792 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 18:21:58.302871   34792 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 18:21:58.302877   34792 command_runner.go:130] > [crio.api]
	I1009 18:21:58.302889   34792 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 18:21:58.302895   34792 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 18:21:58.302903   34792 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 18:21:58.302908   34792 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 18:21:58.302918   34792 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 18:21:58.302922   34792 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 18:21:58.302928   34792 command_runner.go:130] > # stream_port = "0"
	I1009 18:21:58.302935   34792 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 18:21:58.302943   34792 command_runner.go:130] > # stream_enable_tls = false
	I1009 18:21:58.302953   34792 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 18:21:58.302963   34792 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 18:21:58.302972   34792 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 18:21:58.302984   34792 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303003   34792 command_runner.go:130] > # stream_tls_cert = ""
	I1009 18:21:58.303014   34792 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 18:21:58.303019   34792 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1009 18:21:58.303024   34792 command_runner.go:130] > # stream_tls_key = ""
	I1009 18:21:58.303031   34792 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 18:21:58.303041   34792 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 18:21:58.303054   34792 command_runner.go:130] > # automatically pick up the changes.
	I1009 18:21:58.303061   34792 command_runner.go:130] > # stream_tls_ca = ""
	I1009 18:21:58.303083   34792 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303094   34792 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 18:21:58.303103   34792 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 18:21:58.303111   34792 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 18:21:58.303120   34792 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 18:21:58.303130   34792 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 18:21:58.303156   34792 command_runner.go:130] > [crio.runtime]
	I1009 18:21:58.303167   34792 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 18:21:58.303176   34792 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 18:21:58.303182   34792 command_runner.go:130] > # "nofile=1024:2048"
	I1009 18:21:58.303192   34792 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 18:21:58.303201   34792 command_runner.go:130] > # default_ulimits = [
	I1009 18:21:58.303207   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303219   34792 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 18:21:58.303225   34792 command_runner.go:130] > # no_pivot = false
	I1009 18:21:58.303234   34792 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 18:21:58.303261   34792 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 18:21:58.303272   34792 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 18:21:58.303282   34792 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 18:21:58.303294   34792 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 18:21:58.303307   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303315   34792 command_runner.go:130] > # conmon = ""
	I1009 18:21:58.303321   34792 command_runner.go:130] > # Cgroup setting for conmon
	I1009 18:21:58.303330   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 18:21:58.303336   34792 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 18:21:58.303344   34792 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 18:21:58.303351   34792 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 18:21:58.303361   34792 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 18:21:58.303366   34792 command_runner.go:130] > # conmon_env = [
	I1009 18:21:58.303370   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303377   34792 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 18:21:58.303389   34792 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 18:21:58.303398   34792 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 18:21:58.303404   34792 command_runner.go:130] > # default_env = [
	I1009 18:21:58.303408   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303417   34792 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 18:21:58.303434   34792 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 18:21:58.303443   34792 command_runner.go:130] > # selinux = false
	I1009 18:21:58.303454   34792 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 18:21:58.303468   34792 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1009 18:21:58.303479   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303489   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.303500   34792 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1009 18:21:58.303513   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303520   34792 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1009 18:21:58.303530   34792 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 18:21:58.303543   34792 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 18:21:58.303553   34792 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 18:21:58.303567   34792 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 18:21:58.303578   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303586   34792 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 18:21:58.303597   34792 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 18:21:58.303603   34792 command_runner.go:130] > # the cgroup blockio controller.
	I1009 18:21:58.303610   34792 command_runner.go:130] > # blockio_config_file = ""
	I1009 18:21:58.303625   34792 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 18:21:58.303631   34792 command_runner.go:130] > # blockio parameters.
	I1009 18:21:58.303639   34792 command_runner.go:130] > # blockio_reload = false
	I1009 18:21:58.303649   34792 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 18:21:58.303659   34792 command_runner.go:130] > # irqbalance daemon.
	I1009 18:21:58.303667   34792 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 18:21:58.303718   34792 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 18:21:58.303738   34792 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 18:21:58.303748   34792 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 18:21:58.303756   34792 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 18:21:58.303765   34792 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 18:21:58.303772   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.303777   34792 command_runner.go:130] > # rdt_config_file = ""
	I1009 18:21:58.303787   34792 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 18:21:58.303793   34792 command_runner.go:130] > # cgroup_manager = "systemd"
	I1009 18:21:58.303802   34792 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 18:21:58.303809   34792 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 18:21:58.303817   34792 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 18:21:58.303827   34792 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 18:21:58.303836   34792 command_runner.go:130] > # will be added.
	I1009 18:21:58.303844   34792 command_runner.go:130] > # default_capabilities = [
	I1009 18:21:58.303853   34792 command_runner.go:130] > # 	"CHOWN",
	I1009 18:21:58.303860   34792 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 18:21:58.303868   34792 command_runner.go:130] > # 	"FSETID",
	I1009 18:21:58.303874   34792 command_runner.go:130] > # 	"FOWNER",
	I1009 18:21:58.303883   34792 command_runner.go:130] > # 	"SETGID",
	I1009 18:21:58.303899   34792 command_runner.go:130] > # 	"SETUID",
	I1009 18:21:58.303908   34792 command_runner.go:130] > # 	"SETPCAP",
	I1009 18:21:58.303916   34792 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 18:21:58.303925   34792 command_runner.go:130] > # 	"KILL",
	I1009 18:21:58.303931   34792 command_runner.go:130] > # ]
	I1009 18:21:58.303944   34792 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 18:21:58.303958   34792 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 18:21:58.303969   34792 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 18:21:58.303982   34792 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 18:21:58.304001   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304011   34792 command_runner.go:130] > default_sysctls = [
	I1009 18:21:58.304018   34792 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 18:21:58.304025   34792 command_runner.go:130] > ]
	I1009 18:21:58.304033   34792 command_runner.go:130] > # List of devices on the host that a
	I1009 18:21:58.304046   34792 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 18:21:58.304055   34792 command_runner.go:130] > # allowed_devices = [
	I1009 18:21:58.304063   34792 command_runner.go:130] > # 	"/dev/fuse",
	I1009 18:21:58.304071   34792 command_runner.go:130] > # 	"/dev/net/tun",
	I1009 18:21:58.304077   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304088   34792 command_runner.go:130] > # List of additional devices. specified as
	I1009 18:21:58.304102   34792 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 18:21:58.304113   34792 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 18:21:58.304124   34792 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 18:21:58.304153   34792 command_runner.go:130] > # additional_devices = [
	I1009 18:21:58.304163   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304172   34792 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 18:21:58.304182   34792 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 18:21:58.304188   34792 command_runner.go:130] > # 	"/etc/cdi",
	I1009 18:21:58.304197   34792 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 18:21:58.304202   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304212   34792 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 18:21:58.304225   34792 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 18:21:58.304234   34792 command_runner.go:130] > # Defaults to false.
	I1009 18:21:58.304243   34792 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 18:21:58.304257   34792 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 18:21:58.304269   34792 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 18:21:58.304278   34792 command_runner.go:130] > # hooks_dir = [
	I1009 18:21:58.304287   34792 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 18:21:58.304294   34792 command_runner.go:130] > # ]
	I1009 18:21:58.304304   34792 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 18:21:58.304317   34792 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 18:21:58.304329   34792 command_runner.go:130] > # its default mounts from the following two files:
	I1009 18:21:58.304337   34792 command_runner.go:130] > #
	I1009 18:21:58.304347   34792 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 18:21:58.304361   34792 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 18:21:58.304382   34792 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 18:21:58.304389   34792 command_runner.go:130] > #
	I1009 18:21:58.304399   34792 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 18:21:58.304413   34792 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 18:21:58.304427   34792 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 18:21:58.304438   34792 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 18:21:58.304447   34792 command_runner.go:130] > #
	I1009 18:21:58.304455   34792 command_runner.go:130] > # default_mounts_file = ""
	I1009 18:21:58.304466   34792 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 18:21:58.304479   34792 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 18:21:58.304494   34792 command_runner.go:130] > # pids_limit = -1
	I1009 18:21:58.304508   34792 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 18:21:58.304521   34792 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 18:21:58.304532   34792 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 18:21:58.304547   34792 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 18:21:58.304557   34792 command_runner.go:130] > # log_size_max = -1
	I1009 18:21:58.304569   34792 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 18:21:58.304578   34792 command_runner.go:130] > # log_to_journald = false
	I1009 18:21:58.304601   34792 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 18:21:58.304614   34792 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 18:21:58.304622   34792 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 18:21:58.304634   34792 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 18:21:58.304647   34792 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 18:21:58.304657   34792 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 18:21:58.304669   34792 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 18:21:58.304677   34792 command_runner.go:130] > # read_only = false
	I1009 18:21:58.304688   34792 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 18:21:58.304700   34792 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 18:21:58.304708   34792 command_runner.go:130] > # live configuration reload.
	I1009 18:21:58.304716   34792 command_runner.go:130] > # log_level = "info"
	I1009 18:21:58.304726   34792 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 18:21:58.304737   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.304746   34792 command_runner.go:130] > # log_filter = ""
	I1009 18:21:58.304761   34792 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304773   34792 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 18:21:58.304781   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304795   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304805   34792 command_runner.go:130] > # uid_mappings = ""
	I1009 18:21:58.304815   34792 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 18:21:58.304827   34792 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 18:21:58.304837   34792 command_runner.go:130] > # separated by comma.
	I1009 18:21:58.304849   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304863   34792 command_runner.go:130] > # gid_mappings = ""
	I1009 18:21:58.304890   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 18:21:58.304904   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304916   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304929   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.304939   34792 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 18:21:58.304949   34792 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 18:21:58.304961   34792 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 18:21:58.304971   34792 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 18:21:58.304986   34792 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 18:21:58.305032   34792 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 18:21:58.305045   34792 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 18:21:58.305054   34792 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 18:21:58.305063   34792 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 18:21:58.305074   34792 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 18:21:58.305084   34792 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 18:21:58.305097   34792 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 18:21:58.305106   34792 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 18:21:58.305116   34792 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 18:21:58.305124   34792 command_runner.go:130] > # drop_infra_ctr = true
	I1009 18:21:58.305148   34792 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 18:21:58.305162   34792 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 18:21:58.305177   34792 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 18:21:58.305185   34792 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 18:21:58.305197   34792 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 18:21:58.305209   34792 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 18:21:58.305222   34792 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 18:21:58.305233   34792 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 18:21:58.305241   34792 command_runner.go:130] > # shared_cpuset = ""
	I1009 18:21:58.305251   34792 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 18:21:58.305262   34792 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 18:21:58.305270   34792 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 18:21:58.305284   34792 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 18:21:58.305293   34792 command_runner.go:130] > # pinns_path = ""
	I1009 18:21:58.305305   34792 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 18:21:58.305318   34792 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 18:21:58.305328   34792 command_runner.go:130] > # enable_criu_support = true
	I1009 18:21:58.305337   34792 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 18:21:58.305350   34792 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 18:21:58.305359   34792 command_runner.go:130] > # enable_pod_events = false
	I1009 18:21:58.305371   34792 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 18:21:58.305382   34792 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 18:21:58.305389   34792 command_runner.go:130] > # default_runtime = "crun"
	I1009 18:21:58.305401   34792 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 18:21:58.305415   34792 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 18:21:58.305432   34792 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 18:21:58.305444   34792 command_runner.go:130] > # creation as a file is not desired either.
	I1009 18:21:58.305460   34792 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 18:21:58.305471   34792 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 18:21:58.305480   34792 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 18:21:58.305488   34792 command_runner.go:130] > # ]
	I1009 18:21:58.305499   34792 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 18:21:58.305512   34792 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 18:21:58.305524   34792 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 18:21:58.305535   34792 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 18:21:58.305542   34792 command_runner.go:130] > #
	I1009 18:21:58.305551   34792 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 18:21:58.305561   34792 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 18:21:58.305570   34792 command_runner.go:130] > # runtime_type = "oci"
	I1009 18:21:58.305582   34792 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 18:21:58.305590   34792 command_runner.go:130] > # inherit_default_runtime = false
	I1009 18:21:58.305601   34792 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 18:21:58.305611   34792 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 18:21:58.305619   34792 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 18:21:58.305628   34792 command_runner.go:130] > # monitor_env = []
	I1009 18:21:58.305638   34792 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 18:21:58.305647   34792 command_runner.go:130] > # allowed_annotations = []
	I1009 18:21:58.305665   34792 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 18:21:58.305674   34792 command_runner.go:130] > # no_sync_log = false
	I1009 18:21:58.305681   34792 command_runner.go:130] > # default_annotations = {}
	I1009 18:21:58.305690   34792 command_runner.go:130] > # stream_websockets = false
	I1009 18:21:58.305697   34792 command_runner.go:130] > # seccomp_profile = ""
	I1009 18:21:58.305730   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.305743   34792 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 18:21:58.305756   34792 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 18:21:58.305769   34792 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 18:21:58.305779   34792 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 18:21:58.305788   34792 command_runner.go:130] > #   in $PATH.
	I1009 18:21:58.305800   34792 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 18:21:58.305811   34792 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 18:21:58.305823   34792 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 18:21:58.305832   34792 command_runner.go:130] > #   state.
	I1009 18:21:58.305842   34792 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 18:21:58.305854   34792 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1009 18:21:58.305865   34792 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1009 18:21:58.305877   34792 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1009 18:21:58.305888   34792 command_runner.go:130] > #   the values from the default runtime on load time.
	I1009 18:21:58.305902   34792 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 18:21:58.305914   34792 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 18:21:58.305928   34792 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 18:21:58.305940   34792 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 18:21:58.305948   34792 command_runner.go:130] > #   The currently recognized values are:
	I1009 18:21:58.305962   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 18:21:58.305977   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 18:21:58.305989   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 18:21:58.306007   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 18:21:58.306022   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 18:21:58.306036   34792 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 18:21:58.306050   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 18:21:58.306061   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 18:21:58.306082   34792 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 18:21:58.306095   34792 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1009 18:21:58.306109   34792 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1009 18:21:58.306121   34792 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1009 18:21:58.306132   34792 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1009 18:21:58.306154   34792 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1009 18:21:58.306166   34792 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1009 18:21:58.306181   34792 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1009 18:21:58.306194   34792 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 18:21:58.306204   34792 command_runner.go:130] > #   deprecated option "conmon".
	I1009 18:21:58.306216   34792 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 18:21:58.306226   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 18:21:58.306240   34792 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 18:21:58.306250   34792 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 18:21:58.306260   34792 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 18:21:58.306271   34792 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 18:21:58.306285   34792 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1009 18:21:58.306294   34792 command_runner.go:130] > #   conmon-rs by using:
	I1009 18:21:58.306306   34792 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1009 18:21:58.306321   34792 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1009 18:21:58.306336   34792 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1009 18:21:58.306350   34792 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 18:21:58.306363   34792 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 18:21:58.306378   34792 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1009 18:21:58.306392   34792 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1009 18:21:58.306402   34792 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1009 18:21:58.306417   34792 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1009 18:21:58.306431   34792 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1009 18:21:58.306441   34792 command_runner.go:130] > #   when a machine crash happens.
	I1009 18:21:58.306452   34792 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1009 18:21:58.306467   34792 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1009 18:21:58.306481   34792 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1009 18:21:58.306492   34792 command_runner.go:130] > #   seccomp profile for the runtime.
	I1009 18:21:58.306506   34792 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1009 18:21:58.306520   34792 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
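	
	As a rough illustration of the handler-selection rule described above (the handler named in the CRI request wins; with no handler, "default_runtime" applies), here is a minimal Go sketch. The runtimeHandler type, the pickRuntime helper and the table contents are illustrative stand-ins, not CRI-O's actual internals:
	
	package main
	
	import "fmt"
	
	// runtimeHandler is a hypothetical stand-in for one entry of the
	// [crio.runtime.runtimes] table documented above.
	type runtimeHandler struct {
		RuntimePath string
		RuntimeType string
		RuntimeRoot string
	}
	
	// pickRuntime mirrors the documented rule: use the handler named in the
	// CRI request if it exists; with no handler, fall back to the default.
	func pickRuntime(table map[string]runtimeHandler, requested, defaultRuntime string) (runtimeHandler, error) {
		if requested != "" {
			if h, ok := table[requested]; ok {
				return h, nil
			}
			return runtimeHandler{}, fmt.Errorf("unknown runtime handler %q", requested)
		}
		return table[defaultRuntime], nil
	}
	
	func main() {
		table := map[string]runtimeHandler{
			"crun": {RuntimePath: "/usr/libexec/crio/crun", RuntimeType: "oci", RuntimeRoot: "/run/crun"},
			"runc": {RuntimePath: "/usr/libexec/crio/runc", RuntimeType: "oci", RuntimeRoot: "/run/runc"},
		}
		h, err := pickRuntime(table, "", "crun") // no handler in the CRI request
		fmt.Println(h.RuntimePath, err)
	}
	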
	I1009 18:21:58.306525   34792 command_runner.go:130] > #
	I1009 18:21:58.306534   34792 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 18:21:58.306542   34792 command_runner.go:130] > #
	I1009 18:21:58.306552   34792 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 18:21:58.306565   34792 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 18:21:58.306574   34792 command_runner.go:130] > #
	I1009 18:21:58.306584   34792 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 18:21:58.306597   34792 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 18:21:58.306605   34792 command_runner.go:130] > #
	I1009 18:21:58.306615   34792 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 18:21:58.306623   34792 command_runner.go:130] > # feature.
	I1009 18:21:58.306629   34792 command_runner.go:130] > #
	I1009 18:21:58.306641   34792 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1009 18:21:58.306654   34792 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 18:21:58.306667   34792 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 18:21:58.306680   34792 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 18:21:58.306692   34792 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 18:21:58.306700   34792 command_runner.go:130] > #
	I1009 18:21:58.306710   34792 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 18:21:58.306723   34792 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 18:21:58.306730   34792 command_runner.go:130] > #
	I1009 18:21:58.306740   34792 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1009 18:21:58.306752   34792 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 18:21:58.306760   34792 command_runner.go:130] > #
	I1009 18:21:58.306770   34792 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 18:21:58.306782   34792 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 18:21:58.306788   34792 command_runner.go:130] > # limitation.
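	
	To make the opt-in concrete, this is a minimal sketch of a pod that enables the notifier, assuming the k8s.io/api and k8s.io/apimachinery modules are available. The annotation key, the "stop" value and the restartPolicy requirement come from the comments above; the pod name and image are placeholders:
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "seccomp-debug",
				Annotations: map[string]string{
					// Terminate the workload ~5s after a blocked syscall is seen.
					"io.kubernetes.cri-o.seccompNotifierAction": "stop",
				},
			},
			Spec: corev1.PodSpec{
				// Required; otherwise the kubelet restarts the container immediately.
				RestartPolicy: corev1.RestartPolicyNever,
				Containers: []corev1.Container{
					{Name: "app", Image: "registry.k8s.io/pause:3.10.1"},
				},
			},
		}
		fmt.Println(pod.Name, pod.Annotations)
	}
	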
	I1009 18:21:58.306798   34792 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1009 18:21:58.306809   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1009 18:21:58.306818   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306825   34792 command_runner.go:130] > runtime_root = "/run/crun"
	I1009 18:21:58.306837   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306847   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306853   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306863   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306870   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.306879   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.306888   34792 command_runner.go:130] > allowed_annotations = [
	I1009 18:21:58.306898   34792 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1009 18:21:58.306904   34792 command_runner.go:130] > ]
	I1009 18:21:58.306914   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.306921   34792 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 18:21:58.306931   34792 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1009 18:21:58.306937   34792 command_runner.go:130] > runtime_type = ""
	I1009 18:21:58.306944   34792 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 18:21:58.306952   34792 command_runner.go:130] > inherit_default_runtime = false
	I1009 18:21:58.306962   34792 command_runner.go:130] > runtime_config_path = ""
	I1009 18:21:58.306970   34792 command_runner.go:130] > container_min_memory = ""
	I1009 18:21:58.306980   34792 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 18:21:58.306989   34792 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 18:21:58.307006   34792 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 18:21:58.307017   34792 command_runner.go:130] > privileged_without_host_devices = false
	I1009 18:21:58.307031   34792 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 18:21:58.307040   34792 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 18:21:58.307053   34792 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 18:21:58.307068   34792 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 18:21:58.307088   34792 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1009 18:21:58.307107   34792 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1009 18:21:58.307121   34792 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1009 18:21:58.307130   34792 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 18:21:58.307160   34792 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 18:21:58.307179   34792 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 18:21:58.307192   34792 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 18:21:58.307206   34792 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 18:21:58.307215   34792 command_runner.go:130] > # Example:
	I1009 18:21:58.307224   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 18:21:58.307234   34792 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 18:21:58.307244   34792 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 18:21:58.307253   34792 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 18:21:58.307262   34792 command_runner.go:130] > # cpuset = "0-1"
	I1009 18:21:58.307269   34792 command_runner.go:130] > # cpushares = "5"
	I1009 18:21:58.307278   34792 command_runner.go:130] > # cpuquota = "1000"
	I1009 18:21:58.307285   34792 command_runner.go:130] > # cpuperiod = "100000"
	I1009 18:21:58.307294   34792 command_runner.go:130] > # cpulimit = "35"
	I1009 18:21:58.307301   34792 command_runner.go:130] > # Where:
	I1009 18:21:58.307309   34792 command_runner.go:130] > # The workload name is workload-type.
	I1009 18:21:58.307323   34792 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 18:21:58.307336   34792 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 18:21:58.307349   34792 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 18:21:58.307365   34792 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 18:21:58.307377   34792 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
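	
	A minimal sketch of the documented cpulimit-to-cpuquota calculation; the quotaFromLimit name is ours, but the formula follows the comments above, using the example values cpulimit = "35" and cpuperiod = "100000":
	
	package main
	
	import "fmt"
	
	// quotaFromLimit derives a cpuquota (µs per period) from a cpulimit given in
	// millicores: quota = limit(millicores)/1000 cores * period(µs).
	func quotaFromLimit(limitMillicores, periodMicros int64) int64 {
		return limitMillicores * periodMicros / 1000
	}
	
	func main() {
		// With the example above: 35 millicores over a 100000 µs period.
		fmt.Println(quotaFromLimit(35, 100000)) // 3500 µs of CPU time per period
	}
	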
	I1009 18:21:58.307388   34792 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 18:21:58.307399   34792 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 18:21:58.307410   34792 command_runner.go:130] > # Default value is set to true
	I1009 18:21:58.307418   34792 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 18:21:58.307430   34792 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 18:21:58.307440   34792 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 18:21:58.307449   34792 command_runner.go:130] > # Default value is set to 'false'
	I1009 18:21:58.307462   34792 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 18:21:58.307474   34792 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1009 18:21:58.307487   34792 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1009 18:21:58.307495   34792 command_runner.go:130] > # timezone = ""
	I1009 18:21:58.307506   34792 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 18:21:58.307513   34792 command_runner.go:130] > #
	I1009 18:21:58.307523   34792 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 18:21:58.307536   34792 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1009 18:21:58.307544   34792 command_runner.go:130] > [crio.image]
	I1009 18:21:58.307556   34792 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 18:21:58.307566   34792 command_runner.go:130] > # default_transport = "docker://"
	I1009 18:21:58.307578   34792 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 18:21:58.307591   34792 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307600   34792 command_runner.go:130] > # global_auth_file = ""
	I1009 18:21:58.307608   34792 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 18:21:58.307620   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307630   34792 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1009 18:21:58.307641   34792 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 18:21:58.307654   34792 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 18:21:58.307665   34792 command_runner.go:130] > # This option supports live configuration reload.
	I1009 18:21:58.307675   34792 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 18:21:58.307686   34792 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 18:21:58.307698   34792 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1009 18:21:58.307708   34792 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1009 18:21:58.307719   34792 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 18:21:58.307727   34792 command_runner.go:130] > # pause_command = "/pause"
	I1009 18:21:58.307740   34792 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 18:21:58.307753   34792 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 18:21:58.307765   34792 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 18:21:58.307777   34792 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 18:21:58.307789   34792 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 18:21:58.307802   34792 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 18:21:58.307811   34792 command_runner.go:130] > # pinned_images = [
	I1009 18:21:58.307819   34792 command_runner.go:130] > # ]
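	
	A minimal sketch of the three documented pattern forms for pinned images (exact match, glob with a trailing *, keyword with wildcards on both ends); matchesPinned is an illustrative helper, not CRI-O's implementation:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// matchesPinned implements the documented pattern forms: exact match,
	// glob with a trailing *, and keyword with wildcards on both ends.
	func matchesPinned(pattern, image string) bool {
		switch {
		case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
			return strings.Contains(image, strings.Trim(pattern, "*"))
		case strings.HasSuffix(pattern, "*"):
			return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
		default:
			return pattern == image
		}
	}
	
	func main() {
		img := "registry.k8s.io/pause:3.10.1"
		fmt.Println(matchesPinned("registry.k8s.io/pause:3.10.1", img)) // exact: true
		fmt.Println(matchesPinned("registry.k8s.io/*", img))            // glob: true
		fmt.Println(matchesPinned("*pause*", img))                      // keyword: true
	}
	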
	I1009 18:21:58.307830   34792 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 18:21:58.307842   34792 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 18:21:58.307855   34792 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 18:21:58.307868   34792 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 18:21:58.307879   34792 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 18:21:58.307887   34792 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1009 18:21:58.307899   34792 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 18:21:58.307912   34792 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 18:21:58.307930   34792 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 18:21:58.307943   34792 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1009 18:21:58.307955   34792 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 18:21:58.307971   34792 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 18:21:58.307982   34792 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 18:21:58.308001   34792 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 18:21:58.308010   34792 command_runner.go:130] > # changing them here.
	I1009 18:21:58.308020   34792 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1009 18:21:58.308029   34792 command_runner.go:130] > # insecure_registries = [
	I1009 18:21:58.308035   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308049   34792 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 18:21:58.308059   34792 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1009 18:21:58.308067   34792 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 18:21:58.308079   34792 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 18:21:58.308089   34792 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 18:21:58.308100   34792 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1009 18:21:58.308114   34792 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1009 18:21:58.308123   34792 command_runner.go:130] > # auto_reload_registries = false
	I1009 18:21:58.308133   34792 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1009 18:21:58.308163   34792 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1009 18:21:58.308174   34792 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1009 18:21:58.308183   34792 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1009 18:21:58.308191   34792 command_runner.go:130] > # The mode of short name resolution.
	I1009 18:21:58.308205   34792 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1009 18:21:58.308219   34792 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1009 18:21:58.308230   34792 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1009 18:21:58.308238   34792 command_runner.go:130] > # short_name_mode = "enforcing"
	I1009 18:21:58.308250   34792 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1009 18:21:58.308261   34792 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1009 18:21:58.308271   34792 command_runner.go:130] > # oci_artifact_mount_support = true
	I1009 18:21:58.308282   34792 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 18:21:58.308291   34792 command_runner.go:130] > # CNI plugins.
	I1009 18:21:58.308297   34792 command_runner.go:130] > [crio.network]
	I1009 18:21:58.308312   34792 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 18:21:58.308324   34792 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 18:21:58.308334   34792 command_runner.go:130] > # cni_default_network = ""
	I1009 18:21:58.308345   34792 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 18:21:58.308355   34792 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 18:21:58.308365   34792 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 18:21:58.308373   34792 command_runner.go:130] > # plugin_dirs = [
	I1009 18:21:58.308380   34792 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 18:21:58.308388   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308395   34792 command_runner.go:130] > # List of included pod metrics.
	I1009 18:21:58.308404   34792 command_runner.go:130] > # included_pod_metrics = [
	I1009 18:21:58.308411   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308423   34792 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 18:21:58.308429   34792 command_runner.go:130] > [crio.metrics]
	I1009 18:21:58.308440   34792 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 18:21:58.308447   34792 command_runner.go:130] > # enable_metrics = false
	I1009 18:21:58.308457   34792 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 18:21:58.308466   34792 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 18:21:58.308479   34792 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1009 18:21:58.308492   34792 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 18:21:58.308504   34792 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 18:21:58.308514   34792 command_runner.go:130] > # metrics_collectors = [
	I1009 18:21:58.308520   34792 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 18:21:58.308525   34792 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 18:21:58.308530   34792 command_runner.go:130] > # 	"containers_oom_total",
	I1009 18:21:58.308535   34792 command_runner.go:130] > # 	"processes_defunct",
	I1009 18:21:58.308540   34792 command_runner.go:130] > # 	"operations_total",
	I1009 18:21:58.308546   34792 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 18:21:58.308553   34792 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 18:21:58.308560   34792 command_runner.go:130] > # 	"operations_errors_total",
	I1009 18:21:58.308567   34792 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 18:21:58.308574   34792 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 18:21:58.308581   34792 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 18:21:58.308590   34792 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 18:21:58.308598   34792 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 18:21:58.308605   34792 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 18:21:58.308613   34792 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 18:21:58.308620   34792 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 18:21:58.308630   34792 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1009 18:21:58.308635   34792 command_runner.go:130] > # ]
	I1009 18:21:58.308646   34792 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1009 18:21:58.308656   34792 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1009 18:21:58.308664   34792 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 18:21:58.308673   34792 command_runner.go:130] > # metrics_port = 9090
	I1009 18:21:58.308682   34792 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 18:21:58.308691   34792 command_runner.go:130] > # metrics_socket = ""
	I1009 18:21:58.308699   34792 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 18:21:58.308713   34792 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 18:21:58.308726   34792 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 18:21:58.308736   34792 command_runner.go:130] > # certificate on any modification event.
	I1009 18:21:58.308743   34792 command_runner.go:130] > # metrics_cert = ""
	I1009 18:21:58.308754   34792 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 18:21:58.308765   34792 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 18:21:58.308774   34792 command_runner.go:130] > # metrics_key = ""
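	
	Assuming enable_metrics has been switched on (it defaults to false above), the endpoint implied by the metrics_host and metrics_port defaults can be scraped with a short sketch like this; the /metrics path is the usual Prometheus convention:
	
	package main
	
	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)
	
	func main() {
		// Host and port come from the defaults above (metrics_host, metrics_port).
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
	
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", body) // Prometheus text format, e.g. the "operations_total" family
	}
	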
	I1009 18:21:58.308785   34792 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 18:21:58.308793   34792 command_runner.go:130] > [crio.tracing]
	I1009 18:21:58.308803   34792 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 18:21:58.308812   34792 command_runner.go:130] > # enable_tracing = false
	I1009 18:21:58.308821   34792 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 18:21:58.308831   34792 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1009 18:21:58.308842   34792 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 18:21:58.308854   34792 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 18:21:58.308864   34792 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 18:21:58.308871   34792 command_runner.go:130] > [crio.nri]
	I1009 18:21:58.308879   34792 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 18:21:58.308888   34792 command_runner.go:130] > # enable_nri = true
	I1009 18:21:58.308908   34792 command_runner.go:130] > # NRI socket to listen on.
	I1009 18:21:58.308919   34792 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 18:21:58.308926   34792 command_runner.go:130] > # NRI plugin directory to use.
	I1009 18:21:58.308934   34792 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 18:21:58.308945   34792 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 18:21:58.308955   34792 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 18:21:58.308967   34792 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 18:21:58.309020   34792 command_runner.go:130] > # nri_disable_connections = false
	I1009 18:21:58.309031   34792 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 18:21:58.309039   34792 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 18:21:58.309050   34792 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 18:21:58.309060   34792 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 18:21:58.309070   34792 command_runner.go:130] > # NRI default validator configuration.
	I1009 18:21:58.309081   34792 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1009 18:21:58.309094   34792 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1009 18:21:58.309105   34792 command_runner.go:130] > # can be restricted/rejected:
	I1009 18:21:58.309114   34792 command_runner.go:130] > # - OCI hook injection
	I1009 18:21:58.309123   34792 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1009 18:21:58.309144   34792 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1009 18:21:58.309154   34792 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1009 18:21:58.309164   34792 command_runner.go:130] > # - adjustment of linux namespaces
	I1009 18:21:58.309174   34792 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1009 18:21:58.309187   34792 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1009 18:21:58.309199   34792 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1009 18:21:58.309206   34792 command_runner.go:130] > #
	I1009 18:21:58.309213   34792 command_runner.go:130] > # [crio.nri.default_validator]
	I1009 18:21:58.309228   34792 command_runner.go:130] > # nri_enable_default_validator = false
	I1009 18:21:58.309239   34792 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1009 18:21:58.309249   34792 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1009 18:21:58.309259   34792 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1009 18:21:58.309270   34792 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1009 18:21:58.309282   34792 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1009 18:21:58.309292   34792 command_runner.go:130] > # nri_validator_required_plugins = [
	I1009 18:21:58.309300   34792 command_runner.go:130] > # ]
	I1009 18:21:58.309310   34792 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1009 18:21:58.309320   34792 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 18:21:58.309329   34792 command_runner.go:130] > [crio.stats]
	I1009 18:21:58.309338   34792 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 18:21:58.309350   34792 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 18:21:58.309361   34792 command_runner.go:130] > # stats_collection_period = 0
	I1009 18:21:58.309373   34792 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1009 18:21:58.309386   34792 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1009 18:21:58.309395   34792 command_runner.go:130] > # collection_period = 0
	I1009 18:21:58.309439   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287848676Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1009 18:21:58.309455   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287874416Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1009 18:21:58.309486   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.28789246Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1009 18:21:58.309504   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287909281Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1009 18:21:58.309520   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.287966347Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:21:58.309548   34792 command_runner.go:130] ! time="2025-10-09T18:21:58.288147535Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1009 18:21:58.309568   34792 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 18:21:58.309652   34792 cni.go:84] Creating CNI manager for ""
	I1009 18:21:58.309667   34792 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:21:58.309686   34792 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:21:58.309718   34792 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:21:58.309867   34792 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:21:58.309941   34792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:21:58.317943   34792 command_runner.go:130] > kubeadm
	I1009 18:21:58.317964   34792 command_runner.go:130] > kubectl
	I1009 18:21:58.317972   34792 command_runner.go:130] > kubelet
	I1009 18:21:58.317992   34792 binaries.go:51] Found k8s binaries, skipping transfer
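	
	The check above lists the staged binaries and skips the transfer when all three are present. A sketch of the same presence test, with the directory and binary names taken from the log:
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	func main() {
		dir := "/var/lib/minikube/binaries/v1.34.1" // path from the log above
		var missing []string
		for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
			if _, err := os.Stat(filepath.Join(dir, bin)); err != nil {
				missing = append(missing, bin)
			}
		}
		if len(missing) == 0 {
			fmt.Println("found k8s binaries, skipping transfer")
		} else {
			fmt.Println("need to transfer:", missing)
		}
	}
	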
	I1009 18:21:58.318041   34792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:21:58.325700   34792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:21:58.338455   34792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:21:58.350701   34792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1009 18:21:58.362930   34792 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:21:58.366724   34792 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
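	
	A Go equivalent of the grep above, reading /etc/hosts and looking for the IP-to-hostname mapping; hasHostEntry is an illustrative helper:
	
	package main
	
	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)
	
	// hasHostEntry reports whether path maps ip to hostname, mirroring the
	// grep for "192.168.49.2<TAB>control-plane.minikube.internal$" above.
	func hasHostEntry(path, ip, hostname string) (bool, error) {
		f, err := os.Open(path)
		if err != nil {
			return false, err
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) >= 2 && fields[0] == ip && fields[len(fields)-1] == hostname {
				return true, nil
			}
		}
		return false, sc.Err()
	}
	
	func main() {
		ok, err := hasHostEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(ok)
	}
	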
	I1009 18:21:58.366809   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:58.451602   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:58.464478   34792 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:21:58.464503   34792 certs.go:195] generating shared ca certs ...
	I1009 18:21:58.464518   34792 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:58.464657   34792 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:21:58.464699   34792 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:21:58.464708   34792 certs.go:257] generating profile certs ...
	I1009 18:21:58.464789   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:21:58.464832   34792 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:21:58.464870   34792 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:21:58.464880   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:21:58.464891   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:21:58.464904   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:21:58.464914   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:21:58.464926   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:21:58.464938   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:21:58.464950   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:21:58.464961   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:21:58.465007   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:21:58.465033   34792 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:21:58.465040   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:21:58.465060   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:21:58.465083   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:21:58.465117   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:21:58.465182   34792 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:21:58.465212   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.465226   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.465252   34792 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.465730   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:21:58.483386   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:21:58.500383   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:21:58.517315   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:21:58.533903   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:21:58.550845   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:21:58.567242   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:21:58.584667   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:21:58.601626   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:21:58.618749   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:21:58.635789   34792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:21:58.652270   34792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:21:58.664508   34792 ssh_runner.go:195] Run: openssl version
	I1009 18:21:58.670569   34792 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1009 18:21:58.670643   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:21:58.679189   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683037   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683067   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.683111   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:21:58.716325   34792 command_runner.go:130] > b5213941
	I1009 18:21:58.716574   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:21:58.724647   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:21:58.732750   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736237   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736342   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.736392   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:21:58.769488   34792 command_runner.go:130] > 51391683
	I1009 18:21:58.769675   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:21:58.778213   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:21:58.786758   34792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790431   34792 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790472   34792 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.790516   34792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:21:58.824579   34792 command_runner.go:130] > 3ec20f2e
	I1009 18:21:58.824670   34792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
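	
	The hash-and-symlink steps above (openssl x509 -hash followed by ln -fs to /etc/ssl/certs/<hash>.0) can be reproduced with a sketch like the following; it shells out to the same openssl invocation and needs the same root privileges for the symlink:
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	
		// Equivalent of: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	
		// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // -f: replace an existing link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}
	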
	I1009 18:21:58.832975   34792 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836722   34792 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:21:58.836745   34792 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 18:21:58.836750   34792 command_runner.go:130] > Device: 8,1	Inode: 583629      Links: 1
	I1009 18:21:58.836756   34792 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 18:21:58.836762   34792 command_runner.go:130] > Access: 2025-10-09 18:17:52.024667536 +0000
	I1009 18:21:58.836766   34792 command_runner.go:130] > Modify: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836771   34792 command_runner.go:130] > Change: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836775   34792 command_runner.go:130] >  Birth: 2025-10-09 18:13:46.346674317 +0000
	I1009 18:21:58.836829   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:21:58.871297   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.871384   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:21:58.905951   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.906293   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:21:58.941072   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.941180   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:21:58.975637   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:58.975713   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:21:59.010686   34792 command_runner.go:130] > Certificate will not expire
	I1009 18:21:59.010763   34792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 18:21:59.045288   34792 command_runner.go:130] > Certificate will not expire
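	
	The repeated -checkend 86400 probes above have a stdlib equivalent; this sketch parses one of the certificates from the log and reports whether it expires within the next 24 hours:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	// expiresWithin is the stdlib equivalent of
	// `openssl x509 -noout -in <path> -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
	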
	I1009 18:21:59.045372   34792 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:21:59.045468   34792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:21:59.045548   34792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:21:59.072734   34792 cri.go:89] found id: ""
	I1009 18:21:59.072811   34792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:21:59.080291   34792 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1009 18:21:59.080312   34792 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1009 18:21:59.080317   34792 command_runner.go:130] > /var/lib/minikube/etcd:
	I1009 18:21:59.080960   34792 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:21:59.080977   34792 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:21:59.081028   34792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:21:59.088791   34792 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:21:59.088891   34792 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-753440" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.088923   34792 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "functional-753440" cluster setting kubeconfig missing "functional-753440" context setting]
	I1009 18:21:59.089226   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.115972   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.116113   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
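	The kubeconfig.go lines above show the repair decision: the profile's cluster and context entries are missing from the kubeconfig, so minikube rewrites the file before proceeding. The check itself reduces to two map lookups on the parsed config; a sketch using client-go's clientcmd loader (the path and messages are taken from the log, the error handling is illustrative, and this is not minikube's source):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Parse the same file the log's loader.go reads.
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21139-11374/kubeconfig")
	if err != nil {
		panic(err)
	}
	const profile = "functional-753440"
	// Either missing entry triggers the "needs updating (will repair)" path.
	if _, ok := cfg.Clusters[profile]; !ok {
		fmt.Printf("kubeconfig missing %q cluster setting\n", profile)
	}
	if _, ok := cfg.Contexts[profile]; !ok {
		fmt.Printf("kubeconfig missing %q context setting\n", profile)
	}
}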
	I1009 18:21:59.116551   34792 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:21:59.116565   34792 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:21:59.116570   34792 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:21:59.116574   34792 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:21:59.116578   34792 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:21:59.116681   34792 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:21:59.116939   34792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:21:59.125251   34792 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:21:59.125284   34792 kubeadm.go:601] duration metric: took 44.302105ms to restartPrimaryControlPlane
	I1009 18:21:59.125294   34792 kubeadm.go:402] duration metric: took 79.928873ms to StartCluster
	I1009 18:21:59.125313   34792 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.125417   34792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.125977   34792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:21:59.126266   34792 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:21:59.126330   34792 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:21:59.126472   34792 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:21:59.126485   34792 addons.go:69] Setting default-storageclass=true in profile "functional-753440"
	I1009 18:21:59.126503   34792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753440"
	I1009 18:21:59.126475   34792 addons.go:69] Setting storage-provisioner=true in profile "functional-753440"
	I1009 18:21:59.126533   34792 addons.go:238] Setting addon storage-provisioner=true in "functional-753440"
	I1009 18:21:59.126575   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.126787   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.126953   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.129433   34792 out.go:179] * Verifying Kubernetes components...
	I1009 18:21:59.130694   34792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:21:59.147348   34792 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:21:59.147489   34792 kapi.go:59] client config for functional-753440: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:21:59.147681   34792 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:21:59.147763   34792 addons.go:238] Setting addon default-storageclass=true in "functional-753440"
	I1009 18:21:59.147799   34792 host.go:66] Checking if "functional-753440" exists ...
	I1009 18:21:59.148103   34792 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:21:59.149131   34792 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.149169   34792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:21:59.149223   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172020   34792 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.172047   34792 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:21:59.172108   34792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:21:59.172953   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:21:59.190936   34792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
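	The cli_runner/sshutil pair above explains where port 32778 comes from: `docker container inspect -f` extracts the host port Docker published for the container's 22/tcp, and the SSH client then dials 127.0.0.1 on that port with the profile's id_rsa key. A sketch of the same lookup, shelling out exactly as the log does (the helper name is made up for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port Docker published for the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-753440")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port)
}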
	I1009 18:21:59.227445   34792 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:21:59.240811   34792 node_ready.go:35] waiting up to 6m0s for node "functional-753440" to be "Ready" ...
	I1009 18:21:59.240954   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.241028   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.241430   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.284375   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.300190   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.338559   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.338609   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.338653   34792 retry.go:31] will retry after 183.514108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353053   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.353121   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.353157   34792 retry.go:31] will retry after 252.751171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
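	Both applies fail the same way: the apiserver is not yet listening on 8441, so kubectl cannot download the OpenAPI schema it needs for client-side validation, and addons.go schedules a retry with a short randomized delay (183ms and 252ms here, growing on later attempts). A sketch of that retry shape (constants and jitter are illustrative; minikube's actual backoff policy lives in retry.go and is not reproduced here):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs a kubectl apply until it succeeds, sleeping a jittered,
// roughly doubling interval between attempts.
func retryApply(args []string, maxAttempts int) error {
	backoff := 200 * time.Millisecond
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		// Jitter keeps the parallel storageclass and storage-provisioner
		// appliers from retrying in lockstep.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		backoff *= 2
	}
	return err
}

func main() {
	_ = retryApply([]string{"apply", "--force", "-f",
		"/etc/kubernetes/addons/storageclass.yaml"}, 5)
}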
	I1009 18:21:59.522422   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.573424   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.575988   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.576058   34792 retry.go:31] will retry after 293.779687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.606194   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.660438   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.660484   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.660501   34792 retry.go:31] will retry after 279.387954ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.741722   34792 type.go:168] "Request Body" body=""
	I1009 18:21:59.741829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:21:59.742206   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:21:59.870497   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:21:59.921333   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.923563   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.923589   34792 retry.go:31] will retry after 737.997993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.940822   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:21:59.989898   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:21:59.992209   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:21:59.992239   34792 retry.go:31] will retry after 533.533276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.241740   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.242177   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:00.526746   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:00.575738   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.578103   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.578131   34792 retry.go:31] will retry after 930.387704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.662455   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:00.715389   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:00.715427   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.715452   34792 retry.go:31] will retry after 867.874306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:00.741572   34792 type.go:168] "Request Body" body=""
	I1009 18:22:00.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:00.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:01.241687   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.241751   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.242091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:01.242159   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
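	The GET /api/v1/nodes/functional-753440 requests repeating every ~500ms are node_ready.go's readiness poll; "connection refused" is swallowed and logged as a warning because the apiserver is expected to be restarting. A sketch of an equivalent poll with client-go (interval, timeout, and kubeconfig path are illustrative, not minikube's code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True, treating
// transient GET errors (apiserver down) as "not ready yet" rather than fatal.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-11374/kubeconfig")
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "functional-753440", 6*time.Minute); err != nil {
		panic(err)
	}
}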
	I1009 18:22:01.509541   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:01.558188   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.560577   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.560605   34792 retry.go:31] will retry after 1.199996419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.583824   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:01.634758   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:01.634811   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.634834   34792 retry.go:31] will retry after 674.661756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:01.741022   34792 type.go:168] "Request Body" body=""
	I1009 18:22:01.741106   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:01.741428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.241242   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.241329   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.309923   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:02.359167   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.361481   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.361513   34792 retry.go:31] will retry after 1.255051156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.741014   34792 type.go:168] "Request Body" body=""
	I1009 18:22:02.741086   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:02.741469   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:02.761694   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:02.809418   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:02.811709   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:02.811735   34792 retry.go:31] will retry after 2.010356843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.241377   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.241665   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:03.617237   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:03.670575   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:03.670619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.670643   34792 retry.go:31] will retry after 3.029315393s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:03.741894   34792 type.go:168] "Request Body" body=""
	I1009 18:22:03.741959   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:03.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:03.742368   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:04.241167   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.241255   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.241616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.741405   34792 type.go:168] "Request Body" body=""
	I1009 18:22:04.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:04.741793   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:04.823125   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:04.874252   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:04.876942   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:04.876977   34792 retry.go:31] will retry after 2.337146666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:05.241523   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.241603   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.241925   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:05.741876   34792 type.go:168] "Request Body" body=""
	I1009 18:22:05.741944   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:05.742306   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.241056   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.241120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.241524   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:06.241591   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:06.701185   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:06.741960   34792 type.go:168] "Request Body" body=""
	I1009 18:22:06.742030   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:06.742348   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:06.753588   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:06.753625   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:06.753645   34792 retry.go:31] will retry after 5.067292314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.214286   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:07.241989   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.242085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.242465   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:07.267576   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:07.267619   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.267638   34792 retry.go:31] will retry after 3.639407023s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:07.741211   34792 type.go:168] "Request Body" body=""
	I1009 18:22:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:07.741611   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:08.241376   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.241468   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.241797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:08.241859   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:08.741654   34792 type.go:168] "Request Body" body=""
	I1009 18:22:08.741723   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:08.742130   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:09.742012   34792 type.go:168] "Request Body" body=""
	I1009 18:22:09.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:09.742487   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.241171   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.241238   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.241608   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:10.741552   34792 type.go:168] "Request Body" body=""
	I1009 18:22:10.741634   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:10.741987   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:10.742077   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:10.907343   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:10.958356   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:10.960749   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:10.960774   34792 retry.go:31] will retry after 7.184910667s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.241202   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.241304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.241646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:22:11.741393   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:11.741703   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:22:11.821955   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:11.870785   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:11.873227   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:11.873260   34792 retry.go:31] will retry after 9.534535371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:12.241850   34792 type.go:168] "Request Body" body=""
	I1009 18:22:12.241915   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:22:12.242244   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:22:13.241752   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET /api/v1/nodes/functional-753440 polls repeated every ~500ms through 18:22:17.741, all refused; duplicate request/response blocks elided ...]
	I1009 18:22:18.146014   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:18.197672   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:18.200076   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:18.200108   34792 retry.go:31] will retry after 13.416592948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
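The failures above come from minikube's retry wrapper (retry.go) re-running the kubectl apply with growing delays while the apiserver is down. A minimal sketch of that retry-with-backoff pattern in Go; applyManifest and retryWithBackoff are hypothetical names for illustration, not minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out to kubectl the way the log does (simplified:
// the real invocation runs under sudo with an explicit KUBECONFIG).
func applyManifest(path string) error {
	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
	}
	return nil
}

// retryWithBackoff retries fn with jittered, roughly exponential delays,
// mirroring the growing "will retry after ..." intervals in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 5*time.Second, func() error {
		return applyManifest("/etc/kubernetes/addons/storageclass.yaml")
	})
}

The jitter keeps concurrent addon appliers from retrying in lockstep, which matches the irregular delays (13.4s, 6.2s, 21.1s, ...) seen across this log.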
	W1009 18:22:18.241815   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node Ready polls repeated every ~500ms through 18:22:21.242, all refused; duplicate request/response blocks elided ...]
	I1009 18:22:21.408800   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:21.460386   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:21.460443   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:21.460465   34792 retry.go:31] will retry after 6.196258431s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
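Every failure in this stretch bottoms out in the same dial error: connection refused on port 8441, which means the host is reachable but nothing is listening (the apiserver is down), not a network timeout. A quick reachability probe, as a sketch; the address is taken from the log, the rest is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The apiserver endpoint minikube is polling in the log above.
	addr := "192.168.49.2:8441"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// "connection refused" => the host answered but no process is
		// listening on the port (apiserver down); a timeout would point
		// at a network or firewall problem instead.
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}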
	W1009 18:22:23.241839   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node Ready polls repeated every ~500ms from 18:22:21.741 through 18:22:27.242, all refused; duplicate request/response blocks elided ...]
	I1009 18:22:27.657912   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:27.709732   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:27.709776   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:27.709796   34792 retry.go:31] will retry after 21.104663041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:27.742447   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node Ready polls repeated every ~500ms through 18:22:31.242, all refused; duplicate request/response blocks elided ...]
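Each polling block above is node_ready.go fetching the node and checking its Ready condition; the protobuf-first Accept header is visible in the request dumps. A minimal client-go sketch of one such check, assuming the kubeconfig path shown elsewhere in the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Match the Accept header seen in the round_trippers dumps above.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-753440", metav1.GetOptions{})
	if err != nil {
		// The log's failure path: the GET never reaches the apiserver.
		fmt.Printf("error getting node (will retry): %v\n", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready: %s\n", c.Status)
		}
	}
}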
	I1009 18:22:31.617269   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:31.669784   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:31.669834   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:31.669851   34792 retry.go:31] will retry after 15.154475243s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
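kubectl's error message names the workaround: with the apiserver unreachable, client-side validation cannot download the OpenAPI schema, and --validate=false skips that step. Note it only removes the validation failure; the apply itself still needs a live apiserver. A sketch of the suggested invocation, reusing the exact command from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log, plus the --validate=false fallback the
	// error message suggests; sudo accepts the VAR=value form to set
	// KUBECONFIG for the command it runs.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Printf("apply failed: %v\n", err)
	}
}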
	W1009 18:22:32.741636   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node Ready polls repeated every ~500ms from 18:22:31.741 through 18:22:46.742, with the same "will retry" warning roughly every 2s; duplicate request/response blocks elided ...]
	I1009 18:22:46.825331   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:22:46.875678   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:46.878302   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:46.878331   34792 retry.go:31] will retry after 24.753743157s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node Ready polls repeated every ~500ms from 18:22:47.241 through 18:22:48.741, all refused; duplicate request/response blocks elided ...]
	W1009 18:22:48.741814   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:22:48.815023   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:22:48.866903   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:22:48.866953   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:22:48.866975   34792 retry.go:31] will retry after 23.693621864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
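At this point both addon applies have backed off past 20s while the Ready poll keeps firing every 500ms with no end in sight. apimachinery's wait helpers express this kind of loop directly with a bounded deadline; a sketch using wait.PollUntilContextTimeout, where the 5-minute timeout is an assumption for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, as the log does, but give up after 5 minutes
	// instead of retrying indefinitely. Get errors are returned as
	// (false, nil) so transient "connection refused" keeps polling,
	// exactly like node_ready.go:55's "will retry".
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-753440", metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}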
	W1009 18:22:51.241707   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... node Ready polls repeated every ~500ms from 18:22:49.241 through 18:23:03.741, with the same "will retry" warning roughly every 2s; duplicate request/response blocks elided ...]
	I1009 18:23:04.241269   34792 type.go:168] "Request Body" body=""
	I1009 18:23:04.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:04.241673   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:04.741396   34792 type.go:168] "Request Body" body=""
	I1009 18:23:04.741460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:04.741772   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:04.741828   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:05.241582   34792 type.go:168] "Request Body" body=""
	I1009 18:23:05.241646   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:05.241956   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:05.741882   34792 type.go:168] "Request Body" body=""
	I1009 18:23:05.741951   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:05.742320   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:06.241065   34792 type.go:168] "Request Body" body=""
	I1009 18:23:06.241173   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:06.241497   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:06.741232   34792 type.go:168] "Request Body" body=""
	I1009 18:23:06.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:06.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:07.241402   34792 type.go:168] "Request Body" body=""
	I1009 18:23:07.241487   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:07.241813   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:07.241865   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:07.741620   34792 type.go:168] "Request Body" body=""
	I1009 18:23:07.741692   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:07.742021   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:08.241855   34792 type.go:168] "Request Body" body=""
	I1009 18:23:08.241917   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:08.242226   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:08.741000   34792 type.go:168] "Request Body" body=""
	I1009 18:23:08.741070   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:08.741419   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:09.241169   34792 type.go:168] "Request Body" body=""
	I1009 18:23:09.241236   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.241556   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:09.741160   34792 type.go:168] "Request Body" body=""
	I1009 18:23:09.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:09.741542   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:09.741611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:10.241116   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:10.741472   34792 type.go:168] "Request Body" body=""
	I1009 18:23:10.741586   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:10.741912   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.241829   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.242195   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:11.632645   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:11.684065   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:11.686606   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:23:11.686651   34792 retry.go:31] will retry after 43.228082894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
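
Here the addon apply fails before it can do anything useful: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and that download is what dies with "connection refused". minikube's retry.go then schedules another attempt after a 43.228082894 s delay, which fires near the end of this log. A hedged sketch of that apply-and-retry shape in Go — the command line is copied from the log, while the fixed delay merely stands in for whatever backoff retry.go actually computes:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry shells out the same way the log above does and waits
    // between attempts. Illustrative only; not minikube's actual addons code.
    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.34.1/kubectl",
                "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
            time.Sleep(delay) // the log shows a ~43 s delay before this retry
        }
        return lastErr
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml",
            3, 43*time.Second); err != nil {
            fmt.Println(err)
        }
    }
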
	I1009 18:23:11.741902   34792 type.go:168] "Request Body" body=""
	I1009 18:23:11.741967   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:11.742335   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:11.742398   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:12.241111   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.241543   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:12.560933   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:23:12.614798   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614843   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:12.614940   34792 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
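
Unlike the storageclass attempt, the storage-provisioner failure is reported straight to the user as "Enabling 'storage-provisioner' returned an error". Note that the hint in the message, --validate=false, would only skip the schema download; the apply itself still has to reach the same apiserver, so it would fail on the identical refused connection. A quick Go probe of the exact endpoint kubectl needs (URL taken verbatim from the error above; it would have to run on the node, where kubectl ran) separates "manifest is invalid" from "apiserver is down":

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // Self-signed test-cluster certificate; skip verification here only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        // kubectl's client-side validation fetches this schema before applying.
        resp, err := client.Get("https://localhost:8441/openapi/v2?timeout=32s")
        if err != nil {
            fmt.Println("apiserver unreachable; --validate=false would not help:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver reachable:", resp.Status)
    }
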
	I1009 18:23:12.741072   34792 type.go:168] "Request Body" body=""
	I1009 18:23:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:12.741484   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.241192   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.241516   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:13.741110   34792 type.go:168] "Request Body" body=""
	I1009 18:23:13.741196   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:13.741493   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:14.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.241314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:14.241738   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:14.741425   34792 type.go:168] "Request Body" body=""
	I1009 18:23:14.741488   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:14.741803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.241603   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.241993   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:15.741872   34792 type.go:168] "Request Body" body=""
	I1009 18:23:15.741942   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:15.742284   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.241004   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.241108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.241472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:16.741281   34792 type.go:168] "Request Body" body=""
	I1009 18:23:16.741357   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:16.741657   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:16.741710   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:17.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.241489   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.241829   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:17.741674   34792 type.go:168] "Request Body" body=""
	I1009 18:23:17.741762   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:17.742082   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.241893   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.241965   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.242388   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:18.741175   34792 type.go:168] "Request Body" body=""
	I1009 18:23:18.741239   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:18.741553   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:19.241408   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.241852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:19.241908   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:19.741678   34792 type.go:168] "Request Body" body=""
	I1009 18:23:19.741745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:19.742039   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.241909   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.241972   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:20.741268   34792 type.go:168] "Request Body" body=""
	I1009 18:23:20.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:20.741646   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.241394   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.241459   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:21.741624   34792 type.go:168] "Request Body" body=""
	I1009 18:23:21.741688   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:21.741997   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:21.742063   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:22.241916   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.242380   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:22.741197   34792 type.go:168] "Request Body" body=""
	I1009 18:23:22.741265   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:22.741575   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.241312   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.241382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:23.741463   34792 type.go:168] "Request Body" body=""
	I1009 18:23:23.741537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:23.741848   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:24.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.241717   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.242059   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:24.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:24.741910   34792 type.go:168] "Request Body" body=""
	I1009 18:23:24.741982   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:24.742333   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.241063   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.241128   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.241505   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:25.741559   34792 type.go:168] "Request Body" body=""
	I1009 18:23:25.741626   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:25.741933   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:26.241874   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.241956   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.242332   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:26.242390   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:26.741061   34792 type.go:168] "Request Body" body=""
	I1009 18:23:26.741125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:26.741525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.241264   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.241334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.241644   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:27.741375   34792 type.go:168] "Request Body" body=""
	I1009 18:23:27.741438   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:27.741748   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.241487   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.241553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.241862   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:28.741699   34792 type.go:168] "Request Body" body=""
	I1009 18:23:28.741767   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:28.742072   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:28.742126   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:29.241949   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.242051   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.242384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:29.741054   34792 type.go:168] "Request Body" body=""
	I1009 18:23:29.741120   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:29.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.241596   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:30.741484   34792 type.go:168] "Request Body" body=""
	I1009 18:23:30.741560   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:30.741926   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:31.241778   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.241839   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.242174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:31.242227   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:31.740976   34792 type.go:168] "Request Body" body=""
	I1009 18:23:31.741038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:31.741384   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.241106   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.241215   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.241567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:32.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:32.741352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:32.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.241340   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.241406   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.241743   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:33.741456   34792 type.go:168] "Request Body" body=""
	I1009 18:23:33.741516   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:33.741808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:33.741862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:34.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.241695   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.242060   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:34.741908   34792 type.go:168] "Request Body" body=""
	I1009 18:23:34.741974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:34.742307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.241113   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:35.741288   34792 type.go:168] "Request Body" body=""
	I1009 18:23:35.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:35.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:36.241422   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.241820   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:36.241874   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:36.741640   34792 type.go:168] "Request Body" body=""
	I1009 18:23:36.741707   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:36.742009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.241833   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.241903   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.242258   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:37.740969   34792 type.go:168] "Request Body" body=""
	I1009 18:23:37.741033   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:37.741371   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.241096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.241188   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.241533   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:38.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:23:38.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:38.741616   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:38.741669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:39.241545   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.241620   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:39.741751   34792 type.go:168] "Request Body" body=""
	I1009 18:23:39.741816   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:39.742174   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.241991   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.242060   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.242448   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:40.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:23:40.741326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:40.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:40.741695   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:41.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.241463   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:41.741321   34792 type.go:168] "Request Body" body=""
	I1009 18:23:41.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:41.741709   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.241467   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.241529   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.241897   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:42.741700   34792 type.go:168] "Request Body" body=""
	I1009 18:23:42.741768   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:42.742079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:42.742160   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:43.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:43.741093   34792 type.go:168] "Request Body" body=""
	I1009 18:23:43.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:43.741513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.241263   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.241346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.241690   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:44.741269   34792 type.go:168] "Request Body" body=""
	I1009 18:23:44.741339   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:44.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:45.241373   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.241435   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.241795   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:45.241846   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:45.741727   34792 type.go:168] "Request Body" body=""
	I1009 18:23:45.741791   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:45.742097   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.241926   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.241996   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.242356   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:46.741120   34792 type.go:168] "Request Body" body=""
	I1009 18:23:46.741209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:46.741602   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.241322   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.241391   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.241768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:47.741575   34792 type.go:168] "Request Body" body=""
	I1009 18:23:47.741638   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:47.741939   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:47.741988   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:48.241711   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.241771   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.242111   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:48.741933   34792 type.go:168] "Request Body" body=""
	I1009 18:23:48.742004   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:48.742339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.241046   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.241123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.241511   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:49.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:23:49.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:49.741638   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:50.241345   34792 type.go:168] "Request Body" body=""
	I1009 18:23:50.241408   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:50.241740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:50.241790   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:50.741667   34792 type.go:168] "Request Body" body=""
	I1009 18:23:50.741736   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:50.742048   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:51.241420   34792 type.go:168] "Request Body" body=""
	I1009 18:23:51.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:51.241828   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:51.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:23:51.741742   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:51.742050   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:52.241911   34792 type.go:168] "Request Body" body=""
	I1009 18:23:52.241973   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:52.242345   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:52.242396   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:52.741096   34792 type.go:168] "Request Body" body=""
	I1009 18:23:52.741186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:52.741495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:53.241277   34792 type.go:168] "Request Body" body=""
	I1009 18:23:53.241348   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:53.241731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:53.741468   34792 type.go:168] "Request Body" body=""
	I1009 18:23:53.741553   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:53.741866   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:54.241666   34792 type.go:168] "Request Body" body=""
	I1009 18:23:54.241732   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:54.242078   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:54.741932   34792 type.go:168] "Request Body" body=""
	I1009 18:23:54.741997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:54.742359   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:54.742411   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:54.915717   34792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:23:54.969064   34792 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969123   34792 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:23:54.969226   34792 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:23:54.971206   34792 out.go:179] * Enabled addons: 
	I1009 18:23:54.972204   34792 addons.go:514] duration metric: took 1m55.845883827s for enable addons: enabled=[]
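
The storageclass failure above is the same outage the surrounding poll loop is reporting: kubectl cannot reach the apiserver on port 8441, so the OpenAPI schema download fails before the apply is even attempted. As the error text itself notes, --validate=false would only skip that schema validation; it would not get around the refused connection. A minimal sketch of a manual retry once the apiserver answers again, reusing the endpoint and paths shown in the log and assuming anonymous access to /readyz is allowed (the Kubernetes default):

	# Hypothetical manual retry, reusing the paths from the log above.
	# Block until the apiserver responds, then re-apply the addon manifest.
	until curl -ksf https://192.168.49.2:8441/readyz >/dev/null; do sleep 2; done
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	  -f /etc/kubernetes/addons/storageclass.yaml
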
	I1009 18:23:55.241550   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.241625   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.241961   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:55.741824   34792 type.go:168] "Request Body" body=""
	I1009 18:23:55.741904   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:55.742290   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:56.241973   34792 type.go:168] "Request Body" body=""
	I1009 18:23:56.242123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:56.242483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:56.741036   34792 type.go:168] "Request Body" body=""
	I1009 18:23:56.741152   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:56.741467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:57.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:23:57.241200   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:57.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:57.241611   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:57.741252   34792 type.go:168] "Request Body" body=""
	I1009 18:23:57.741334   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:57.741629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:58.241447   34792 type.go:168] "Request Body" body=""
	I1009 18:23:58.241725   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:58.242009   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:58.741244   34792 type.go:168] "Request Body" body=""
	I1009 18:23:58.741314   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:58.741649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:23:59.241582   34792 type.go:168] "Request Body" body=""
	I1009 18:23:59.241664   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:59.241976   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:23:59.242029   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:23:59.741645   34792 type.go:168] "Request Body" body=""
	I1009 18:23:59.741711   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:23:59.742016   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:00.241679   34792 type.go:168] "Request Body" body=""
	I1009 18:24:00.241745   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:00.242104   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:00.741941   34792 type.go:168] "Request Body" body=""
	I1009 18:24:00.742015   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:00.742375   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:01.240979   34792 type.go:168] "Request Body" body=""
	I1009 18:24:01.241079   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:01.241446   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:01.741104   34792 type.go:168] "Request Body" body=""
	I1009 18:24:01.741198   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:01.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:01.741587   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:02.241191   34792 type.go:168] "Request Body" body=""
	I1009 18:24:02.241259   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:02.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:02.741155   34792 type.go:168] "Request Body" body=""
	I1009 18:24:02.741230   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:02.741560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:03.241230   34792 type.go:168] "Request Body" body=""
	I1009 18:24:03.241291   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:03.241606   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:03.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:24:03.741320   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:03.741610   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:03.741659   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:04.241477   34792 type.go:168] "Request Body" body=""
	I1009 18:24:04.241610   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:04.241994   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:04.741666   34792 type.go:168] "Request Body" body=""
	I1009 18:24:04.741733   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:04.742049   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:05.241727   34792 type.go:168] "Request Body" body=""
	I1009 18:24:05.241807   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:05.242113   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:05.741949   34792 type.go:168] "Request Body" body=""
	I1009 18:24:05.742014   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:05.742361   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:05.742412   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:06.240966   34792 type.go:168] "Request Body" body=""
	I1009 18:24:06.241087   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:06.241438   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:06.741043   34792 type.go:168] "Request Body" body=""
	I1009 18:24:06.741125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:06.741482   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:07.241180   34792 type.go:168] "Request Body" body=""
	I1009 18:24:07.241242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:07.241557   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:07.741167   34792 type.go:168] "Request Body" body=""
	I1009 18:24:07.741259   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:07.741613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:08.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:24:08.241302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:08.241607   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:08.241657   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:08.741270   34792 type.go:168] "Request Body" body=""
	I1009 18:24:08.741337   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:08.741689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:09.241656   34792 type.go:168] "Request Body" body=""
	I1009 18:24:09.241721   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:09.242060   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:09.741758   34792 type.go:168] "Request Body" body=""
	I1009 18:24:09.741832   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:09.742204   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:10.241854   34792 type.go:168] "Request Body" body=""
	I1009 18:24:10.241948   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:10.242297   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:10.242356   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:10.740989   34792 type.go:168] "Request Body" body=""
	I1009 18:24:10.741064   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:10.741405   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:11.242008   34792 type.go:168] "Request Body" body=""
	I1009 18:24:11.242096   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:11.242414   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:11.741019   34792 type.go:168] "Request Body" body=""
	I1009 18:24:11.741090   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:11.741443   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:12.241051   34792 type.go:168] "Request Body" body=""
	I1009 18:24:12.241127   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:12.241488   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:12.741129   34792 type.go:168] "Request Body" body=""
	I1009 18:24:12.741226   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:12.741564   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:12.741614   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:13.241115   34792 type.go:168] "Request Body" body=""
	I1009 18:24:13.241208   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:13.241540   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:13.741171   34792 type.go:168] "Request Body" body=""
	I1009 18:24:13.741235   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:13.741549   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:14.241221   34792 type.go:168] "Request Body" body=""
	I1009 18:24:14.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:14.241613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:14.741228   34792 type.go:168] "Request Body" body=""
	I1009 18:24:14.741294   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:14.741619   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:14.741670   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:15.241203   34792 type.go:168] "Request Body" body=""
	I1009 18:24:15.241266   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:15.241587   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:15.741480   34792 type.go:168] "Request Body" body=""
	I1009 18:24:15.741544   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:15.741911   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:16.241491   34792 type.go:168] "Request Body" body=""
	I1009 18:24:16.241558   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:16.241870   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:16.741517   34792 type.go:168] "Request Body" body=""
	I1009 18:24:16.741585   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:16.741911   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:16.741963   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:17.241588   34792 type.go:168] "Request Body" body=""
	I1009 18:24:17.241650   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:17.241989   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:17.741644   34792 type.go:168] "Request Body" body=""
	I1009 18:24:17.741710   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:17.742011   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:18.241688   34792 type.go:168] "Request Body" body=""
	I1009 18:24:18.241755   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:18.242125   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:18.741790   34792 type.go:168] "Request Body" body=""
	I1009 18:24:18.741854   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:18.742223   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:18.742290   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:19.242039   34792 type.go:168] "Request Body" body=""
	I1009 18:24:19.242109   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:19.242472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:19.741076   34792 type.go:168] "Request Body" body=""
	I1009 18:24:19.741162   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:19.741541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:20.241117   34792 type.go:168] "Request Body" body=""
	I1009 18:24:20.241204   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:20.241525   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:20.741486   34792 type.go:168] "Request Body" body=""
	I1009 18:24:20.741556   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:20.741868   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:21.241426   34792 type.go:168] "Request Body" body=""
	I1009 18:24:21.241498   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:21.241806   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:21.241862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:21.741431   34792 type.go:168] "Request Body" body=""
	I1009 18:24:21.741537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:21.741868   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:22.241461   34792 type.go:168] "Request Body" body=""
	I1009 18:24:22.241535   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:22.241849   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:22.741438   34792 type.go:168] "Request Body" body=""
	I1009 18:24:22.741501   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:22.741846   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:23.241408   34792 type.go:168] "Request Body" body=""
	I1009 18:24:23.241477   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:23.241783   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:23.741400   34792 type.go:168] "Request Body" body=""
	I1009 18:24:23.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:23.741789   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:23.741845   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:24.241359   34792 type.go:168] "Request Body" body=""
	I1009 18:24:24.241431   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:24.241755   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:24.741348   34792 type.go:168] "Request Body" body=""
	I1009 18:24:24.741408   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:24.741733   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:25.241293   34792 type.go:168] "Request Body" body=""
	I1009 18:24:25.241374   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:25.241694   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:25.741621   34792 type.go:168] "Request Body" body=""
	I1009 18:24:25.741682   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:25.742037   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:25.742088   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:26.241707   34792 type.go:168] "Request Body" body=""
	I1009 18:24:26.241774   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:26.242098   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:26.741808   34792 type.go:168] "Request Body" body=""
	I1009 18:24:26.741871   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:26.742236   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:27.241893   34792 type.go:168] "Request Body" body=""
	I1009 18:24:27.241957   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:27.242307   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:27.741971   34792 type.go:168] "Request Body" body=""
	I1009 18:24:27.742039   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:27.742363   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:27.742412   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:28.240944   34792 type.go:168] "Request Body" body=""
	I1009 18:24:28.241012   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:28.241383   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:28.740967   34792 type.go:168] "Request Body" body=""
	I1009 18:24:28.741047   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:28.741411   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:29.241219   34792 type.go:168] "Request Body" body=""
	I1009 18:24:29.241290   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:29.241653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:29.741274   34792 type.go:168] "Request Body" body=""
	I1009 18:24:29.741345   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:29.741655   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:30.241249   34792 type.go:168] "Request Body" body=""
	I1009 18:24:30.241326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:30.241636   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:30.241689   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:30.741565   34792 type.go:168] "Request Body" body=""
	I1009 18:24:30.741637   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:30.741952   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:31.241609   34792 type.go:168] "Request Body" body=""
	I1009 18:24:31.241669   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:31.242013   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:31.741661   34792 type.go:168] "Request Body" body=""
	I1009 18:24:31.741727   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:31.742040   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:32.241675   34792 type.go:168] "Request Body" body=""
	I1009 18:24:32.241739   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:32.242047   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:32.242100   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:32.741353   34792 type.go:168] "Request Body" body=""
	I1009 18:24:32.741425   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:32.741746   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:33.241341   34792 type.go:168] "Request Body" body=""
	I1009 18:24:33.241401   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:33.241718   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:33.741321   34792 type.go:168] "Request Body" body=""
	I1009 18:24:33.741388   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:33.741692   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:34.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:24:34.241326   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:34.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:34.741266   34792 type.go:168] "Request Body" body=""
	I1009 18:24:34.741339   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:34.741686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:34.741740   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:35.241256   34792 type.go:168] "Request Body" body=""
	I1009 18:24:35.241332   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:35.241644   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:35.741557   34792 type.go:168] "Request Body" body=""
	I1009 18:24:35.741623   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:35.741960   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:36.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:24:36.241698   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:36.242094   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:36.741738   34792 type.go:168] "Request Body" body=""
	I1009 18:24:36.741810   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:36.742164   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:36.742232   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:37.241811   34792 type.go:168] "Request Body" body=""
	I1009 18:24:37.241879   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:37.242219   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:37.741906   34792 type.go:168] "Request Body" body=""
	I1009 18:24:37.741972   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:37.742360   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:38.241974   34792 type.go:168] "Request Body" body=""
	I1009 18:24:38.242032   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:38.242406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:38.740970   34792 type.go:168] "Request Body" body=""
	I1009 18:24:38.741038   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:38.741400   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:39.241238   34792 type.go:168] "Request Body" body=""
	I1009 18:24:39.241302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:39.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:39.241695   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:39.741304   34792 type.go:168] "Request Body" body=""
	I1009 18:24:39.741370   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:39.741689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:40.241283   34792 type.go:168] "Request Body" body=""
	I1009 18:24:40.241349   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:40.241689   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:40.741596   34792 type.go:168] "Request Body" body=""
	I1009 18:24:40.741665   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:40.741992   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:41.241775   34792 type.go:168] "Request Body" body=""
	I1009 18:24:41.241853   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:41.242210   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:41.242282   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:41.741904   34792 type.go:168] "Request Body" body=""
	I1009 18:24:41.741970   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:41.742352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:42.240959   34792 type.go:168] "Request Body" body=""
	I1009 18:24:42.241085   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:42.241411   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:42.741000   34792 type.go:168] "Request Body" body=""
	I1009 18:24:42.741063   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:42.741398   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:43.242037   34792 type.go:168] "Request Body" body=""
	I1009 18:24:43.242129   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:43.242476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:43.242528   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:43.741058   34792 type.go:168] "Request Body" body=""
	I1009 18:24:43.741124   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:43.741463   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:44.241058   34792 type.go:168] "Request Body" body=""
	I1009 18:24:44.241159   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:44.241499   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:44.741068   34792 type.go:168] "Request Body" body=""
	I1009 18:24:44.741159   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:44.741472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:45.241073   34792 type.go:168] "Request Body" body=""
	I1009 18:24:45.241155   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:45.241482   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:45.741464   34792 type.go:168] "Request Body" body=""
	I1009 18:24:45.741533   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:45.741834   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:45.741888   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:46.241484   34792 type.go:168] "Request Body" body=""
	I1009 18:24:46.241552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:46.241885   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:46.741462   34792 type.go:168] "Request Body" body=""
	I1009 18:24:46.741538   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:46.741838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:47.241422   34792 type.go:168] "Request Body" body=""
	I1009 18:24:47.241483   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:47.241808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:47.741360   34792 type.go:168] "Request Body" body=""
	I1009 18:24:47.741425   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:47.741734   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:48.241415   34792 type.go:168] "Request Body" body=""
	I1009 18:24:48.241480   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:48.241802   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:48.241867   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:48.741335   34792 type.go:168] "Request Body" body=""
	I1009 18:24:48.741399   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:48.741718   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:49.241753   34792 type.go:168] "Request Body" body=""
	I1009 18:24:49.241820   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:49.242187   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:49.741848   34792 type.go:168] "Request Body" body=""
	I1009 18:24:49.741914   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:49.742284   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:50.242049   34792 type.go:168] "Request Body" body=""
	I1009 18:24:50.242115   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:50.242449   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:50.242500   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:50.741086   34792 type.go:168] "Request Body" body=""
	I1009 18:24:50.741198   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:50.741527   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:51.241098   34792 type.go:168] "Request Body" body=""
	I1009 18:24:51.241186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:51.241495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:51.741082   34792 type.go:168] "Request Body" body=""
	I1009 18:24:51.741183   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:51.741522   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:52.241121   34792 type.go:168] "Request Body" body=""
	I1009 18:24:52.241212   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:52.241508   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:52.741094   34792 type.go:168] "Request Body" body=""
	I1009 18:24:52.741203   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:52.741514   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:52.741572   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:53.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:24:53.241183   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:53.241580   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:53.741218   34792 type.go:168] "Request Body" body=""
	I1009 18:24:53.741300   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:53.741630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:54.241270   34792 type.go:168] "Request Body" body=""
	I1009 18:24:54.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:54.241658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:54.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:24:54.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:54.741636   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:54.741687   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:55.241234   34792 type.go:168] "Request Body" body=""
	I1009 18:24:55.241306   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:55.241626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:55.741410   34792 type.go:168] "Request Body" body=""
	I1009 18:24:55.741479   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:55.741852   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:56.241427   34792 type.go:168] "Request Body" body=""
	I1009 18:24:56.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:56.241834   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:56.741423   34792 type.go:168] "Request Body" body=""
	I1009 18:24:56.741492   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:56.741854   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:56.741921   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:57.241419   34792 type.go:168] "Request Body" body=""
	I1009 18:24:57.241484   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:57.241784   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:57.741337   34792 type.go:168] "Request Body" body=""
	I1009 18:24:57.741402   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:57.741768   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:58.241353   34792 type.go:168] "Request Body" body=""
	I1009 18:24:58.241420   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:58.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:58.741285   34792 type.go:168] "Request Body" body=""
	I1009 18:24:58.741356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:58.741698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:24:59.241536   34792 type.go:168] "Request Body" body=""
	I1009 18:24:59.241601   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:59.241906   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:24:59.241970   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:24:59.741466   34792 type.go:168] "Request Body" body=""
	I1009 18:24:59.741528   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:24:59.741866   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:00.241421   34792 type.go:168] "Request Body" body=""
	I1009 18:25:00.241487   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:00.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:00.741667   34792 type.go:168] "Request Body" body=""
	I1009 18:25:00.741748   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:00.742076   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:01.241775   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.241841   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.242226   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:01.242284   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:01.741879   34792 type.go:168] "Request Body" body=""
	I1009 18:25:01.741957   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:01.742330   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.241978   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.242041   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.242423   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:02.741029   34792 type.go:168] "Request Body" body=""
	I1009 18:25:02.741115   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:02.741462   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.241086   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.241179   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.241501   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:03.741018   34792 type.go:168] "Request Body" body=""
	I1009 18:25:03.741114   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:03.741476   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:03.741528   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:04.241053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.241116   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.241452   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:04.741007   34792 type.go:168] "Request Body" body=""
	I1009 18:25:04.741083   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:04.741445   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.241037   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.241427   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:05.741247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:05.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:05.741697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:05.741771   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:06.241254   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.241327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.241639   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:06.741286   34792 type.go:168] "Request Body" body=""
	I1009 18:25:06.741366   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:06.741735   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:07.741217   34792 type.go:168] "Request Body" body=""
	I1009 18:25:07.741279   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:07.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:08.241244   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.241647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:08.241711   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:08.741241   34792 type.go:168] "Request Body" body=""
	I1009 18:25:08.741304   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:08.741686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.241716   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.242124   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:09.741814   34792 type.go:168] "Request Body" body=""
	I1009 18:25:09.741880   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:09.742241   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:10.241918   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.241983   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.242339   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:10.242405   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:10.741070   34792 type.go:168] "Request Body" body=""
	I1009 18:25:10.741194   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:10.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:11.741236   34792 type.go:168] "Request Body" body=""
	I1009 18:25:11.741322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:11.741656   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.241283   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.241345   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.241648   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:12.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:12.741341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:12.741670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:12.741727   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:13.241274   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.241352   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.241660   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:13.741258   34792 type.go:168] "Request Body" body=""
	I1009 18:25:13.741346   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:13.741679   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.241260   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:14.741277   34792 type.go:168] "Request Body" body=""
	I1009 18:25:14.741354   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:14.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:15.241247   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.241309   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.241612   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:15.241669   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:15.741488   34792 type.go:168] "Request Body" body=""
	I1009 18:25:15.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:15.741890   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.241468   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.241537   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.241842   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:16.741415   34792 type.go:168] "Request Body" body=""
	I1009 18:25:16.741480   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:16.741850   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:17.241442   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.241504   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.241800   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:17.241861   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:17.741344   34792 type.go:168] "Request Body" body=""
	I1009 18:25:17.741411   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:17.741764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.241362   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.241432   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.241786   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:18.741325   34792 type.go:168] "Request Body" body=""
	I1009 18:25:18.741390   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:18.741723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:19.241633   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.241702   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.242011   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:19.242081   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:19.741669   34792 type.go:168] "Request Body" body=""
	I1009 18:25:19.741733   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:19.742064   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.241763   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.241826   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.242186   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:20.742053   34792 type.go:168] "Request Body" body=""
	I1009 18:25:20.742131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:20.742513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.241071   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.241504   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:21.741088   34792 type.go:168] "Request Body" body=""
	I1009 18:25:21.741207   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:21.741536   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:21.741594   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:22.241126   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.241545   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:22.741131   34792 type.go:168] "Request Body" body=""
	I1009 18:25:22.741233   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:22.741588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.241242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.241568   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:23.741162   34792 type.go:168] "Request Body" body=""
	I1009 18:25:23.741242   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:23.741577   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:23.741627   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:24.241178   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.241578   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:24.741188   34792 type.go:168] "Request Body" body=""
	I1009 18:25:24.741295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:24.741619   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.241275   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.241641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:25.741538   34792 type.go:168] "Request Body" body=""
	I1009 18:25:25.741597   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:25.741905   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:25.741979   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:26.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.241527   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.241835   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:26.741401   34792 type.go:168] "Request Body" body=""
	I1009 18:25:26.741467   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:26.741780   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.241351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.241416   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:27.741308   34792 type.go:168] "Request Body" body=""
	I1009 18:25:27.741383   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:27.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:28.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.241331   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.241634   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:28.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:28.741253   34792 type.go:168] "Request Body" body=""
	I1009 18:25:28.741315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:28.741626   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.241574   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.241643   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.241986   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:29.741657   34792 type.go:168] "Request Body" body=""
	I1009 18:25:29.741719   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:29.742063   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:30.241739   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.241804   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.242168   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:30.242230   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:30.741968   34792 type.go:168] "Request Body" body=""
	I1009 18:25:30.742100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:30.742470   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.241076   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.241171   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.241532   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:31.741177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:31.741282   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:31.741624   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.241262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.241670   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:32.741275   34792 type.go:168] "Request Body" body=""
	I1009 18:25:32.741360   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:32.741742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:32.741796   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:33.241329   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.241396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.241697   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:33.741289   34792 type.go:168] "Request Body" body=""
	I1009 18:25:33.741384   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:33.741759   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.241368   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.241760   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:34.741351   34792 type.go:168] "Request Body" body=""
	I1009 18:25:34.741428   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:34.741798   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:34.741864   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:35.241399   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.241491   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.241838   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:35.741772   34792 type.go:168] "Request Body" body=""
	I1009 18:25:35.741836   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:35.742224   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.242003   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.242076   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.242435   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:36.741028   34792 type.go:168] "Request Body" body=""
	I1009 18:25:36.741097   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:36.741464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:37.241121   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.241212   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.241551   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:37.241620   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:25:37.741109   34792 type.go:168] "Request Body" body=""
	I1009 18:25:37.741219   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:37.741567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.241177   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.241629   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:38.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:25:38.741325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:38.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:25:39.241652   34792 type.go:168] "Request Body" body=""
	I1009 18:25:39.241726   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:25:39.242067   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:25:39.242125   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	[... polling loop condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-753440 request (same Accept and User-Agent headers as above) repeats every ~500ms from 18:25:39.741 through 18:26:40.742, every attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" and returning in 0ms; node_ready.go:55 re-emits the "will retry" warning roughly every 2s for the whole span ...]
	I1009 18:26:41.241751   34792 type.go:168] "Request Body" body=""
	I1009 18:26:41.241818   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:41.242203   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:41.242264   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:41.741856   34792 type.go:168] "Request Body" body=""
	I1009 18:26:41.741921   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:41.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:42.241895   34792 type.go:168] "Request Body" body=""
	I1009 18:26:42.241958   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:42.242315   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:42.741994   34792 type.go:168] "Request Body" body=""
	I1009 18:26:42.742065   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:42.742389   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:43.240973   34792 type.go:168] "Request Body" body=""
	I1009 18:26:43.241061   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:43.241393   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:43.740990   34792 type.go:168] "Request Body" body=""
	I1009 18:26:43.741062   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:43.741419   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:43.741468   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:44.241000   34792 type.go:168] "Request Body" body=""
	I1009 18:26:44.241064   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:44.241416   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:44.740980   34792 type.go:168] "Request Body" body=""
	I1009 18:26:44.741068   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:44.741391   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:45.241003   34792 type.go:168] "Request Body" body=""
	I1009 18:26:45.241071   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:45.241415   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:45.741236   34792 type.go:168] "Request Body" body=""
	I1009 18:26:45.741300   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:45.741605   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:45.741660   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:46.241187   34792 type.go:168] "Request Body" body=""
	I1009 18:26:46.241257   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:46.241559   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:46.741123   34792 type.go:168] "Request Body" body=""
	I1009 18:26:46.741200   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:46.741513   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:47.241090   34792 type.go:168] "Request Body" body=""
	I1009 18:26:47.241182   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:47.241488   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:47.741079   34792 type.go:168] "Request Body" body=""
	I1009 18:26:47.741166   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:47.741472   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:48.241093   34792 type.go:168] "Request Body" body=""
	I1009 18:26:48.241186   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:48.241592   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:48.241645   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:48.741196   34792 type.go:168] "Request Body" body=""
	I1009 18:26:48.741263   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:48.741567   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:49.241340   34792 type.go:168] "Request Body" body=""
	I1009 18:26:49.241413   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:49.241715   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:49.741320   34792 type.go:168] "Request Body" body=""
	I1009 18:26:49.741390   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:49.741693   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:50.241274   34792 type.go:168] "Request Body" body=""
	I1009 18:26:50.241356   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:50.241686   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:50.241739   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:50.741604   34792 type.go:168] "Request Body" body=""
	I1009 18:26:50.741672   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:50.741979   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:51.241631   34792 type.go:168] "Request Body" body=""
	I1009 18:26:51.241697   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:51.242059   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:51.741717   34792 type.go:168] "Request Body" body=""
	I1009 18:26:51.741781   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:51.742121   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:52.241772   34792 type.go:168] "Request Body" body=""
	I1009 18:26:52.241840   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:52.242193   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:52.242249   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:52.741892   34792 type.go:168] "Request Body" body=""
	I1009 18:26:52.741970   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:52.742329   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:53.241997   34792 type.go:168] "Request Body" body=""
	I1009 18:26:53.242075   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:53.242417   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:53.741024   34792 type.go:168] "Request Body" body=""
	I1009 18:26:53.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:53.741440   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:54.241044   34792 type.go:168] "Request Body" body=""
	I1009 18:26:54.241125   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:54.241492   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:54.741067   34792 type.go:168] "Request Body" body=""
	I1009 18:26:54.741161   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:54.741529   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:54.741583   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:55.241129   34792 type.go:168] "Request Body" body=""
	I1009 18:26:55.241221   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:55.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:55.741431   34792 type.go:168] "Request Body" body=""
	I1009 18:26:55.741496   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:55.741812   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:56.241424   34792 type.go:168] "Request Body" body=""
	I1009 18:26:56.241490   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:56.241796   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:56.741393   34792 type.go:168] "Request Body" body=""
	I1009 18:26:56.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:56.741773   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:56.741826   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:57.241378   34792 type.go:168] "Request Body" body=""
	I1009 18:26:57.241453   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:57.241771   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:57.741379   34792 type.go:168] "Request Body" body=""
	I1009 18:26:57.741447   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:57.741762   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:58.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:26:58.241413   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:58.241723   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:58.741322   34792 type.go:168] "Request Body" body=""
	I1009 18:26:58.741396   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:58.741713   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:26:59.241600   34792 type.go:168] "Request Body" body=""
	I1009 18:26:59.241669   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:59.241990   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:26:59.242043   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:26:59.741668   34792 type.go:168] "Request Body" body=""
	I1009 18:26:59.741732   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:26:59.742052   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:00.241717   34792 type.go:168] "Request Body" body=""
	I1009 18:27:00.241783   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:00.242095   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:00.741931   34792 type.go:168] "Request Body" body=""
	I1009 18:27:00.742008   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:00.742337   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:01.242007   34792 type.go:168] "Request Body" body=""
	I1009 18:27:01.242099   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:01.242479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:01.242534   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:01.741056   34792 type.go:168] "Request Body" body=""
	I1009 18:27:01.741158   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:01.741495   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:02.241218   34792 type.go:168] "Request Body" body=""
	I1009 18:27:02.241281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:02.241609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:02.741259   34792 type.go:168] "Request Body" body=""
	I1009 18:27:02.741340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:02.741682   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:03.241295   34792 type.go:168] "Request Body" body=""
	I1009 18:27:03.241359   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:03.241698   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:03.741242   34792 type.go:168] "Request Body" body=""
	I1009 18:27:03.741308   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:03.741628   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:03.741679   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:04.241208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:04.241270   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:04.241627   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:04.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:04.741287   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:04.741583   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:05.241255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:05.241340   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:05.241742   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:05.741635   34792 type.go:168] "Request Body" body=""
	I1009 18:27:05.741703   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:05.742066   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:05.742130   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:06.241658   34792 type.go:168] "Request Body" body=""
	I1009 18:27:06.241731   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:06.242079   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:06.741854   34792 type.go:168] "Request Body" body=""
	I1009 18:27:06.741922   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:06.742243   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:07.241927   34792 type.go:168] "Request Body" body=""
	I1009 18:27:07.241997   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:07.242459   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:07.741045   34792 type.go:168] "Request Body" body=""
	I1009 18:27:07.741126   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:07.741466   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:08.241033   34792 type.go:168] "Request Body" body=""
	I1009 18:27:08.241100   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:08.241458   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:08.241511   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:08.741034   34792 type.go:168] "Request Body" body=""
	I1009 18:27:08.741096   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:08.741406   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:09.241378   34792 type.go:168] "Request Body" body=""
	I1009 18:27:09.241439   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:09.241764   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:09.741349   34792 type.go:168] "Request Body" body=""
	I1009 18:27:09.741417   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:09.741711   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:10.241285   34792 type.go:168] "Request Body" body=""
	I1009 18:27:10.241365   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:10.241692   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:10.241753   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:10.741690   34792 type.go:168] "Request Body" body=""
	I1009 18:27:10.741757   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:10.742128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:11.241848   34792 type.go:168] "Request Body" body=""
	I1009 18:27:11.241913   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:11.242250   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:11.741958   34792 type.go:168] "Request Body" body=""
	I1009 18:27:11.742022   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:11.742364   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:12.240970   34792 type.go:168] "Request Body" body=""
	I1009 18:27:12.241079   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:12.241437   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:12.741083   34792 type.go:168] "Request Body" body=""
	I1009 18:27:12.741169   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:12.741518   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:12.741570   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:13.241130   34792 type.go:168] "Request Body" body=""
	I1009 18:27:13.241246   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:13.241579   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:13.741161   34792 type.go:168] "Request Body" body=""
	I1009 18:27:13.741231   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:13.741554   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:14.241185   34792 type.go:168] "Request Body" body=""
	I1009 18:27:14.241247   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:14.241557   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:14.741128   34792 type.go:168] "Request Body" body=""
	I1009 18:27:14.741223   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:14.741560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:14.741616   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:15.241160   34792 type.go:168] "Request Body" body=""
	I1009 18:27:15.241231   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:15.241537   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:15.741362   34792 type.go:168] "Request Body" body=""
	I1009 18:27:15.741426   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:15.741731   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:16.241332   34792 type.go:168] "Request Body" body=""
	I1009 18:27:16.241395   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:16.241711   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:16.741290   34792 type.go:168] "Request Body" body=""
	I1009 18:27:16.741362   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:16.741691   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:16.741746   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:17.241296   34792 type.go:168] "Request Body" body=""
	I1009 18:27:17.241365   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:17.241677   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:17.741260   34792 type.go:168] "Request Body" body=""
	I1009 18:27:17.741330   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:17.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:18.241233   34792 type.go:168] "Request Body" body=""
	I1009 18:27:18.241315   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:18.241649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:18.741254   34792 type.go:168] "Request Body" body=""
	I1009 18:27:18.741327   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:18.741641   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:19.241576   34792 type.go:168] "Request Body" body=""
	I1009 18:27:19.241642   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:19.241965   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:19.242017   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:19.741671   34792 type.go:168] "Request Body" body=""
	I1009 18:27:19.741744   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:19.742057   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:20.241721   34792 type.go:168] "Request Body" body=""
	I1009 18:27:20.241782   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:20.242076   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:20.742009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:20.742090   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:20.742453   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:21.241057   34792 type.go:168] "Request Body" body=""
	I1009 18:27:21.241122   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:21.241467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:21.741089   34792 type.go:168] "Request Body" body=""
	I1009 18:27:21.741181   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:21.741490   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:21.741542   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:22.241108   34792 type.go:168] "Request Body" body=""
	I1009 18:27:22.241209   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:22.241541   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:22.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:27:22.741302   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:22.741654   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:23.241319   34792 type.go:168] "Request Body" body=""
	I1009 18:27:23.241387   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:23.241701   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:23.741234   34792 type.go:168] "Request Body" body=""
	I1009 18:27:23.741296   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:23.741605   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:23.741658   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:24.241213   34792 type.go:168] "Request Body" body=""
	I1009 18:27:24.241289   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:24.241598   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:24.741228   34792 type.go:168] "Request Body" body=""
	I1009 18:27:24.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:24.741613   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:25.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:27:25.241322   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:25.241625   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:25.741545   34792 type.go:168] "Request Body" body=""
	I1009 18:27:25.741614   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:25.741927   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:25.742024   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:26.241505   34792 type.go:168] "Request Body" body=""
	I1009 18:27:26.241567   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:26.241878   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:26.741454   34792 type.go:168] "Request Body" body=""
	I1009 18:27:26.741518   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:26.741875   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:27.241441   34792 type.go:168] "Request Body" body=""
	I1009 18:27:27.241506   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:27.241818   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:27.741400   34792 type.go:168] "Request Body" body=""
	I1009 18:27:27.741470   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:27.741797   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:28.241401   34792 type.go:168] "Request Body" body=""
	I1009 18:27:28.241474   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:28.241808   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:28.241862   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:28.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:28.741472   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:28.741806   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:29.241748   34792 type.go:168] "Request Body" body=""
	I1009 18:27:29.241819   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:29.242161   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:29.741821   34792 type.go:168] "Request Body" body=""
	I1009 18:27:29.741885   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:29.742231   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:30.241904   34792 type.go:168] "Request Body" body=""
	I1009 18:27:30.241974   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:30.242318   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:30.242382   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:30.741035   34792 type.go:168] "Request Body" body=""
	I1009 18:27:30.741108   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:30.741409   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:31.241068   34792 type.go:168] "Request Body" body=""
	I1009 18:27:31.241132   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:31.241479   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:31.741086   34792 type.go:168] "Request Body" body=""
	I1009 18:27:31.741176   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:31.741471   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:32.241219   34792 type.go:168] "Request Body" body=""
	I1009 18:27:32.241295   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:32.241610   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:32.741219   34792 type.go:168] "Request Body" body=""
	I1009 18:27:32.741298   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:32.741606   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:32.741661   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:33.241210   34792 type.go:168] "Request Body" body=""
	I1009 18:27:33.241276   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:33.241588   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:33.741182   34792 type.go:168] "Request Body" body=""
	I1009 18:27:33.741248   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:33.741547   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:34.241192   34792 type.go:168] "Request Body" body=""
	I1009 18:27:34.241262   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:34.241590   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:34.741212   34792 type.go:168] "Request Body" body=""
	I1009 18:27:34.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:34.741609   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:35.241253   34792 type.go:168] "Request Body" body=""
	I1009 18:27:35.241323   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:35.241649   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:35.241703   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:35.741567   34792 type.go:168] "Request Body" body=""
	I1009 18:27:35.741632   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:35.741973   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:36.241654   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.241728   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.242025   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:36.741778   34792 type.go:168] "Request Body" body=""
	I1009 18:27:36.741844   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:36.742212   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:37.241852   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.241925   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.242276   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:37.242330   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:37.741978   34792 type.go:168] "Request Body" body=""
	I1009 18:27:37.742052   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:37.742377   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.240952   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.241027   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.241428   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:38.741115   34792 type.go:168] "Request Body" body=""
	I1009 18:27:38.741222   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:38.741569   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.241464   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.241531   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.241853   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:39.741475   34792 type.go:168] "Request Body" body=""
	I1009 18:27:39.741552   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:39.741888   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:39.741940   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:40.241482   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.241546   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.241865   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:40.741822   34792 type.go:168] "Request Body" body=""
	I1009 18:27:40.741912   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:40.742310   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.241924   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.241992   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.242352   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:41.742037   34792 type.go:168] "Request Body" body=""
	I1009 18:27:41.742123   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:41.742467   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:41.742533   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:42.241062   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.241131   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.241483   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:42.741199   34792 type.go:168] "Request Body" body=""
	I1009 18:27:42.741261   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:42.741576   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.241209   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.241285   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.241620   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:43.741257   34792 type.go:168] "Request Body" body=""
	I1009 18:27:43.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:43.741675   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:44.241258   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.241325   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.241630   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:44.241684   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:44.741229   34792 type.go:168] "Request Body" body=""
	I1009 18:27:44.741292   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:44.741621   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.241009   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.241089   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.241464   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:45.741255   34792 type.go:168] "Request Body" body=""
	I1009 18:27:45.741321   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:45.741658   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:46.241261   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.241333   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.241687   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:46.241736   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:46.741271   34792 type.go:168] "Request Body" body=""
	I1009 18:27:46.741338   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:46.741695   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.241266   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.241341   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.241666   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:47.741243   34792 type.go:168] "Request Body" body=""
	I1009 18:27:47.741310   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:47.741653   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.241251   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.241342   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.241651   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:48.741262   34792 type.go:168] "Request Body" body=""
	I1009 18:27:48.741328   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:48.741647   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:48.741699   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:49.241692   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.241772   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.242116   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:49.741779   34792 type.go:168] "Request Body" body=""
	I1009 18:27:49.741846   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:49.742256   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.241914   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.241978   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.242357   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:50.741207   34792 type.go:168] "Request Body" body=""
	I1009 18:27:50.741284   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:50.741645   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:51.241236   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.241313   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.241642   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:51.241696   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:51.741256   34792 type.go:168] "Request Body" body=""
	I1009 18:27:51.741385   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:51.741740   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.241321   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.241392   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.241724   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:52.741315   34792 type.go:168] "Request Body" body=""
	I1009 18:27:52.741382   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:52.741729   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:53.241330   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.241398   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.241736   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:53.241797   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:53.741402   34792 type.go:168] "Request Body" body=""
	I1009 18:27:53.741465   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:53.741821   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.241418   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.241482   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.241803   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:54.741399   34792 type.go:168] "Request Body" body=""
	I1009 18:27:54.741462   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:54.741794   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:55.241395   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.241460   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.241801   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:55.241851   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:55.741689   34792 type.go:168] "Request Body" body=""
	I1009 18:27:55.741763   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:55.742091   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.241733   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.241801   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.242128   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:56.741823   34792 type.go:168] "Request Body" body=""
	I1009 18:27:56.741896   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:56.742277   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:57.241950   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.242025   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.242395   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1009 18:27:57.242451   34792 node_ready.go:55] error getting node "functional-753440" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-753440": dial tcp 192.168.49.2:8441: connect: connection refused
	I1009 18:27:57.741025   34792 type.go:168] "Request Body" body=""
	I1009 18:27:57.741093   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:57.741454   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.241127   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.241225   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.241560   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:58.741208   34792 type.go:168] "Request Body" body=""
	I1009 18:27:58.741281   34792 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-753440" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1009 18:27:58.741640   34792 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1009 18:27:59.241113   34792 node_ready.go:38] duration metric: took 6m0.000256287s for node "functional-753440" to be "Ready" ...
	I1009 18:27:59.244464   34792 out.go:203] 
	W1009 18:27:59.246567   34792 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 18:27:59.246590   34792 out.go:285] * 
	W1009 18:27:59.248293   34792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:27:59.250105   34792 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.14509242Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=42cee86d-2d7a-4cec-9d74-65293f5a0cff name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.145124298Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=42cee86d-2d7a-4cec-9d74-65293f5a0cff name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.542737234Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=065175c8-91bf-4012-b9b3-5d9f72220ddb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.544713622Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b5c624b3-1d60-42b4-8984-d4a17802b148 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.545742903Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.546007765Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.549577694Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.549972908Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.566608991Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.567967013Z" level=info msg="createCtr: deleting container ID b3498f589bd49e0b9c940b743b5094ba76aa060907c421c898d65866b3194079 from idIndex" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.568007083Z" level=info msg="createCtr: removing container b3498f589bd49e0b9c940b743b5094ba76aa060907c421c898d65866b3194079" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.568042928Z" level=info msg="createCtr: deleting container b3498f589bd49e0b9c940b743b5094ba76aa060907c421c898d65866b3194079 from storage" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.5700798Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=ee9dc85b-d56f-424a-970b-1b05c2c11a8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:09 functional-753440 crio[2938]: time="2025-10-09T18:28:09.608865345Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=ffe9e9e8-4dc1-4383-bccb-ffffa17ab717 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.543525972Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=cd5d77f3-a79c-4141-b918-597a9585f5a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.544416676Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f2015387-be70-4126-954b-98fa88012101 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.545401318Z" level=info msg="Creating container: kube-system/etcd-functional-753440/etcd" id=78482f0b-ef96-4fcd-8132-94eb6e8e890d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.545662537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.549281176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.54972537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.567164407Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=78482f0b-ef96-4fcd-8132-94eb6e8e890d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.568932603Z" level=info msg="createCtr: deleting container ID 1ed215b54b65a9515698d98ec07b677fb151acd8801c3d8b58a171bba54bacaf from idIndex" id=78482f0b-ef96-4fcd-8132-94eb6e8e890d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.568982028Z" level=info msg="createCtr: removing container 1ed215b54b65a9515698d98ec07b677fb151acd8801c3d8b58a171bba54bacaf" id=78482f0b-ef96-4fcd-8132-94eb6e8e890d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.569030393Z" level=info msg="createCtr: deleting container 1ed215b54b65a9515698d98ec07b677fb151acd8801c3d8b58a171bba54bacaf from storage" id=78482f0b-ef96-4fcd-8132-94eb6e8e890d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:28:12 functional-753440 crio[2938]: time="2025-10-09T18:28:12.571480303Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=78482f0b-ef96-4fcd-8132-94eb6e8e890d name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:28:13.230160    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:13.230652    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:13.232500    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:13.232948    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:28:13.234583    5424 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:28:13 up  1:10,  0 user,  load average: 0.07, 0.09, 0.10
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:28:06 functional-753440 kubelet[1796]:  > podSandboxID="3bfe74c8d570ecc37f6892435ddc21354701de89899703d3fea256f249b5032e"
	Oct 09 18:28:06 functional-753440 kubelet[1796]: E1009 18:28:06.575983    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:06 functional-753440 kubelet[1796]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(d8200e5d2f7672a0974c7d953c472e15): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:06 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:06 functional-753440 kubelet[1796]: E1009 18:28:06.576024    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="d8200e5d2f7672a0974c7d953c472e15"
	Oct 09 18:28:07 functional-753440 kubelet[1796]: E1009 18:28:07.054944    1796 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753440.186ce57ba0b4bd78\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce57ba0b4bd78  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:17:53.534958968 +0000 UTC m=+0.381579824,LastTimestamp:2025-10-09 18:17:53.536403063 +0000 UTC m=+0.383023919,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:28:08 functional-753440 kubelet[1796]: E1009 18:28:08.228302    1796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:28:08 functional-753440 kubelet[1796]: I1009 18:28:08.428720    1796 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:28:08 functional-753440 kubelet[1796]: E1009 18:28:08.429128    1796 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.542285    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.570410    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:09 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:09 functional-753440 kubelet[1796]:  > podSandboxID="a0f669ac9226ee4ac7b841aacfe05ece4235d10b02fe7bb351eab32cadb9e24d"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.570509    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:09 functional-753440 kubelet[1796]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:09 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:09 functional-753440 kubelet[1796]: E1009 18:28:09.570540    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:28:12 functional-753440 kubelet[1796]: E1009 18:28:12.543022    1796 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:28:12 functional-753440 kubelet[1796]: E1009 18:28:12.571808    1796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:28:12 functional-753440 kubelet[1796]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:12 functional-753440 kubelet[1796]:  > podSandboxID="b2bb9a720dde4343bb6d68e21981701423cf9ba8fc536a4b16c3a5d7282c9e5b"
	Oct 09 18:28:12 functional-753440 kubelet[1796]: E1009 18:28:12.571899    1796 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:28:12 functional-753440 kubelet[1796]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:28:12 functional-753440 kubelet[1796]:  > logger="UnhandledError"
	Oct 09 18:28:12 functional-753440 kubelet[1796]: E1009 18:28:12.571927    1796 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	

                                                
                                                
-- /stdout --
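
The six-minute stall that opens the captured trace above is a fixed-cadence poll: every 500ms minikube GETs /api/v1/nodes/functional-753440, treats "connection refused" as retryable, and only gives up when the overall deadline expires (node_ready.go: "took 6m0.000256287s"). A minimal sketch of that wait pattern, assuming a bare net/http client with TLS verification skipped rather than minikube's internal clientset:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitNodeReady polls the node URL every 500ms until the API server answers
	// 200 or the context deadline expires, mirroring the trace above. A real
	// client would also decode the node object and check its Ready condition.
	func waitNodeReady(ctx context.Context, url string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // local diagnosis only
		}}
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
			case <-tick.C:
				resp, err := client.Get(url)
				if err != nil {
					continue // connection refused while the apiserver is down: retry
				}
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		fmt.Println(waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-753440"))
	}

On this run every attempt dies at TCP connect before any HTTP exchange, which is why each round_trippers response line logs an empty status with milliseconds=0.
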
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (302.415399ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.17s)
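
Every failure in this test, and in the ExtraConfig run below, reduces to the same symptom: CRI-O cannot create the static-pod containers ("cannot open sd-bus: No such file or directory"), so kube-apiserver (8441), kube-controller-manager (10257), and kube-scheduler (10259) never come up and every health check sees connection refused. A small diagnostic sketch (not part of the test suite) probing the three endpoints kubeadm's control-plane-check phase uses:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Probe the health endpoints that kubeadm's control-plane-check polls.
	// InsecureSkipVerify is acceptable only for this kind of local triage.
	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		endpoints := map[string]string{
			"kube-apiserver":          "https://192.168.49.2:8441/livez",
			"kube-controller-manager": "https://127.0.0.1:10257/healthz",
			"kube-scheduler":          "https://127.0.0.1:10259/livez",
		}
		for name, url := range endpoints {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Printf("%-24s down: %v\n", name, err)
				continue
			}
			resp.Body.Close()
			fmt.Printf("%-24s %s\n", name, resp.Status)
		}
	}

On this run all three would report connection refused, pointing back at the container-creation error in the CRI-O and kubelet logs rather than at kubeadm configuration.
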

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (736.04s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m14.162911057s)

                                                
                                                
-- stdout --
	* [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500920961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
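For reference, the [control-plane-check] failures captured above reduce to three HTTP health probes that never get a connection. Below is a minimal Go sketch of an equivalent probe, useful for reproducing the check by hand from inside the node; the endpoint URLs are copied from the log, while the client settings, retry loop, and 4-minute budget are illustrative assumptions, not kubeadm's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Endpoints copied from the [control-plane-check] lines in the log above.
    var endpoints = map[string]string{
        "kube-apiserver":          "https://192.168.49.2:8441/livez",
        "kube-controller-manager": "https://127.0.0.1:10257/healthz",
        "kube-scheduler":          "https://127.0.0.1:10259/livez",
    }

    func main() {
        // The components serve self-signed certificates, so this diagnostic
        // probe skips verification (assumption; acceptable only for debugging).
        client := &http.Client{
            Timeout:   10 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute) // mirrors kubeadm's 4m0s budget
        for name, url := range endpoints {
            for {
                resp, err := client.Get(url)
                if err == nil {
                    healthy := resp.StatusCode == http.StatusOK
                    resp.Body.Close()
                    if healthy {
                        fmt.Printf("%s is healthy\n", name)
                        break
                    }
                }
                if time.Now().After(deadline) {
                    fmt.Printf("%s is not healthy: %v\n", name, err)
                    break
                }
                time.Sleep(time.Second)
            }
        }
    }

A "connection refused" from all three endpoints, as in the log, means the static pods never came up at all, which is why kubeadm's advice is to inspect the containers with crictl rather than to keep polling the endpoints.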
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m14.165217973s for "functional-753440" cluster.
I1009 18:40:28.243310   14880 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
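The inspect dump above is the raw material for the port lookups that appear later in this log (the repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls). As a hypothetical illustration of the same lookup in Go, decoding the JSON instead of using a Go template; the struct deliberately covers only the fields needed here and is not a full Docker API binding:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // portBinding matches the entries under NetworkSettings.Ports in the dump.
    type portBinding struct {
        HostIp   string
        HostPort string
    }

    type container struct {
        NetworkSettings struct {
            Ports map[string][]portBinding
        }
    }

    func main() {
        // docker inspect prints a JSON array, one element per container.
        out, err := exec.Command("docker", "inspect", "functional-753440").Output()
        if err != nil {
            log.Fatal(err)
        }
        var containers []container
        if err := json.Unmarshal(out, &containers); err != nil {
            log.Fatal(err)
        }
        if len(containers) == 0 {
            log.Fatal("no such container")
        }
        // In the dump above, 22/tcp maps to 127.0.0.1:32778.
        for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
            fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
        }
    }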
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (302.751502ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
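The "may be ok" note reflects that minikube status deliberately exits non-zero when the cluster is degraded while still printing usable state (here, "Running") on stdout. A minimal sketch of how a harness can capture both, assuming only the binary path and profile name shown in the log:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", "functional-753440")
        out, err := cmd.Output() // stdout is still returned on a non-zero exit
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // A non-zero code (exit status 2 above) encodes degraded state,
            // so treat it as informational rather than fatal.
            fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
        } else if err != nil {
            log.Fatal(err) // binary missing, not executable, etc.
        }
        fmt.Printf("host state: %s", out)
    }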
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                              │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ -p functional-753440 --alsologtostderr -v=8                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:21 UTC │                     │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.1                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.3                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:latest                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add minikube-local-cache-test:functional-753440                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache delete minikube-local-cache-test:functional-753440                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl images                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ cache   │ functional-753440 cache reload                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ kubectl │ functional-753440 kubectl -- --context functional-753440 get pods                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ start   │ -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:14.121358   41166 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:14.121581   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121584   41166 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:14.121587   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121762   41166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:28:14.122238   41166 out.go:368] Setting JSON to false
	I1009 18:28:14.123079   41166 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4242,"bootTime":1760030252,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:28:14.123169   41166 start.go:141] virtualization: kvm guest
	I1009 18:28:14.126034   41166 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:28:14.127592   41166 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:14.127614   41166 notify.go:220] Checking for updates...
	I1009 18:28:14.130226   41166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:14.131542   41166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:28:14.132869   41166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:28:14.134010   41166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:28:14.135272   41166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:14.137002   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:14.137147   41166 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:14.160624   41166 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:28:14.160747   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.216904   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.207579982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.216988   41166 docker.go:318] overlay module found
	I1009 18:28:14.218985   41166 out.go:179] * Using the docker driver based on existing profile
	I1009 18:28:14.220343   41166 start.go:305] selected driver: docker
	I1009 18:28:14.220350   41166 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.220421   41166 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:14.220493   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.276259   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.266635533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.276841   41166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:28:14.276862   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:14.276912   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:14.276975   41166 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.279613   41166 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:28:14.281054   41166 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:28:14.282608   41166 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:14.283987   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:14.284021   41166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:28:14.284028   41166 cache.go:64] Caching tarball of preloaded images
	I1009 18:28:14.284084   41166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:14.284156   41166 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:28:14.284167   41166 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:28:14.284262   41166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:28:14.304989   41166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:28:14.304998   41166 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:28:14.305012   41166 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:14.305037   41166 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:14.305103   41166 start.go:364] duration metric: took 53.763µs to acquireMachinesLock for "functional-753440"
	I1009 18:28:14.305117   41166 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:28:14.305123   41166 fix.go:54] fixHost starting: 
	I1009 18:28:14.305350   41166 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:28:14.322441   41166 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:28:14.322475   41166 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:28:14.324442   41166 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:28:14.324473   41166 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:14.324533   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.341338   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.341548   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.341554   41166 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:14.486226   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.486250   41166 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:28:14.486345   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.504505   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.504708   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.504715   41166 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:28:14.659579   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.659644   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.677783   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.677973   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.677983   41166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:14.823918   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:14.823946   41166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:28:14.823965   41166 ubuntu.go:190] setting up certificates
	I1009 18:28:14.823972   41166 provision.go:84] configureAuth start
	I1009 18:28:14.824015   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:14.841567   41166 provision.go:143] copyHostCerts
	I1009 18:28:14.841617   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:28:14.841630   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:28:14.841693   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:28:14.841773   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:28:14.841776   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:28:14.841800   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:28:14.841852   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:28:14.841854   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:28:14.841874   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:28:14.841914   41166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:28:14.981751   41166 provision.go:177] copyRemoteCerts
	I1009 18:28:14.981793   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:14.981823   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.999896   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.102707   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:28:15.120896   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:28:15.138889   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:28:15.156869   41166 provision.go:87] duration metric: took 332.885748ms to configureAuth
	I1009 18:28:15.156885   41166 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:15.157034   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:15.157151   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.175195   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:15.175399   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:15.175409   41166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:28:15.452446   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:28:15.452465   41166 machine.go:96] duration metric: took 1.127985417s to provisionDockerMachine
	I1009 18:28:15.452477   41166 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:28:15.452491   41166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:15.452568   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:15.452629   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.470937   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.575864   41166 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:15.579955   41166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:15.579971   41166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:15.579990   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:28:15.580053   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:28:15.580152   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:28:15.580226   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:28:15.580265   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:28:15.588947   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:15.607328   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:28:15.625331   41166 start.go:296] duration metric: took 172.840814ms for postStartSetup
	I1009 18:28:15.625414   41166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:15.625450   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.644868   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.745460   41166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:15.750036   41166 fix.go:56] duration metric: took 1.444904813s for fixHost
	I1009 18:28:15.750054   41166 start.go:83] releasing machines lock for "functional-753440", held for 1.444944565s
	I1009 18:28:15.750113   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:15.768383   41166 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:15.768426   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.768462   41166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:15.768509   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.787244   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.788794   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.887419   41166 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:15.939267   41166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:28:15.975115   41166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:15.980039   41166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:15.980121   41166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:15.988843   41166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:28:15.988855   41166 start.go:495] detecting cgroup driver to use...
	I1009 18:28:15.988896   41166 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:15.988937   41166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:28:16.003980   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:28:16.017315   41166 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:16.017382   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:16.032779   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:16.045881   41166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:16.126678   41166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:16.213883   41166 docker.go:234] disabling docker service ...
	I1009 18:28:16.213927   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:16.229180   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:16.242501   41166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:16.328471   41166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:16.418726   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:16.432452   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:16.447044   41166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:28:16.447090   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.456711   41166 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:28:16.456763   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.466740   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.476505   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.485804   41166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:16.494457   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.504131   41166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.513460   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.522986   41166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:16.531036   41166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:16.539288   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:16.625799   41166 ssh_runner.go:195] Run: sudo systemctl restart crio
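
Everything from the crictl.yaml write down to the crio restart is plain file surgery: point crictl at the CRI-O socket, then rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed so the pause image is registry.k8s.io/pause:3.10.1, the cgroup manager is systemd, conmon lands in the pod cgroup, and unprivileged ports start at 0, finishing with a daemon-reload and `systemctl restart crio`. A rough native equivalent of the two central sed edits, assuming the path and values from the log (using Go's regexp instead of sed is my own substitution):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
		// A daemon-reload plus `systemctl restart crio` is still needed
		// afterwards, exactly as the log shows.
	}
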
	I1009 18:28:16.734227   41166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:28:16.734392   41166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:28:16.738753   41166 start.go:563] Will wait 60s for crictl version
	I1009 18:28:16.738810   41166 ssh_runner.go:195] Run: which crictl
	I1009 18:28:16.742485   41166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:16.767659   41166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:28:16.767722   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.796602   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.826463   41166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:28:16.827844   41166 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:16.845122   41166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:16.851283   41166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 18:28:16.852593   41166 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:16.852703   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:16.852758   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.885854   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.885865   41166 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:28:16.885914   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.911537   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.911549   41166 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:16.911554   41166 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:28:16.911659   41166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
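
The [Unit]/[Service]/[Install] fragment above is a systemd drop-in: the empty ExecStart= first clears the base unit's command before the kubelet invocation replaces it, and the scp lines further down show the rendered file landing at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of rendering such a drop-in with text/template, with the flag list trimmed and the struct fields hypothetical rather than minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	const unitTmpl = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		// Values taken from the log above.
		err := t.Execute(os.Stdout, struct{ KubernetesVersion, NodeName, NodeIP string }{
			"v1.34.1", "functional-753440", "192.168.49.2",
		})
		if err != nil {
			panic(err)
		}
	}
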
	I1009 18:28:16.911716   41166 ssh_runner.go:195] Run: crio config
	I1009 18:28:16.959392   41166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 18:28:16.959415   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:16.959431   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:16.959447   41166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:16.959474   41166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:16.959581   41166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
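The generated file is four YAML documents in one stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration for the node components. A short sketch that walks the multi-document stream and prints each document's kind, assuming gopkg.in/yaml.v3 is available:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		dec := yaml.NewDecoder(f) // the decoder handles the --- separators for us
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
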
	I1009 18:28:16.959637   41166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:28:16.967720   41166 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:16.967786   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:16.975557   41166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:28:16.988463   41166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:17.001726   41166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
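
The `scp memory --> path (N bytes)` lines mean the payload was rendered in memory and streamed straight to the remote file; no local file is involved. An approximation of that transfer using the ssh CLI piped into `sudo tee` (minikube uses its own SSH client; the host string and port come from the sshutil lines earlier in the log, and the payload here is a placeholder):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// scpMemory streams an in-memory buffer over ssh into a root-owned
	// remote file, roughly what the "scp memory" log lines describe.
	func scpMemory(host, dst string, payload []byte) error {
		cmd := exec.Command("ssh", "-p", "32778", host,
			fmt.Sprintf("sudo tee %s >/dev/null", dst))
		cmd.Stdin = bytes.NewReader(payload)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("scp memory -> %s: %v: %s", dst, err, out)
		}
		return nil
	}

	func main() {
		err := scpMemory("docker@127.0.0.1", "/var/tmp/minikube/kubeadm.yaml.new",
			[]byte("...rendered config..."))
		if err != nil {
			panic(err)
		}
	}
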
	I1009 18:28:17.014711   41166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:17.018916   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:17.102967   41166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:17.116133   41166 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:28:17.116168   41166 certs.go:195] generating shared ca certs ...
	I1009 18:28:17.116186   41166 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:17.116310   41166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:28:17.116344   41166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:28:17.116350   41166 certs.go:257] generating profile certs ...
	I1009 18:28:17.116439   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:28:17.116473   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:28:17.116504   41166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:28:17.116599   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:28:17.116623   41166 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:17.116628   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:17.116647   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:28:17.116699   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:17.116718   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:28:17.116754   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:17.117319   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:17.135881   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:28:17.153983   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:17.171867   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:17.189721   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:28:17.208056   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:28:17.226995   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:17.245251   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:28:17.263239   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:28:17.281041   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:28:17.298701   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:17.316541   41166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:17.329669   41166 ssh_runner.go:195] Run: openssl version
	I1009 18:28:17.335820   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:17.344631   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348564   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348610   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.382973   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:17.391446   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:28:17.399936   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403644   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403697   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.438115   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:17.446527   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:28:17.455201   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459043   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459093   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.494448   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
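
The `openssl x509 -hash -noout` runs compute the subject-name hash OpenSSL uses for CA lookups, and each `test -L ... || ln -fs ...` creates the matching /etc/ssl/certs/<hash>.0 symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 above). The same idiom written natively, still shelling out to openssl for the hash itself:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func ensureHashLink(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
			return nil // already a symlink, same as `test -L`
		}
		os.Remove(link) // `ln -fs` semantics: replace whatever is there
		return os.Symlink(pem, link)
	}

	func main() {
		if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
	}
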
	I1009 18:28:17.503208   41166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:17.507381   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:28:17.542560   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:28:17.577279   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:28:17.612414   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:28:17.648669   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:28:17.684353   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
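
Each `openssl x509 -noout -checkend 86400` call asks one question: will this certificate still be valid 86400 seconds (24 hours) from now? Exit status 0 means yes; a nonzero status would trigger regeneration. A native equivalent with crypto/x509:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// inside the given window (the inverse of checkend's exit status 0).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
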
	I1009 18:28:17.718697   41166 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:17.718762   41166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:17.718816   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.747722   41166 cri.go:89] found id: ""
	I1009 18:28:17.747771   41166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:17.755951   41166 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:28:17.755970   41166 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:28:17.756013   41166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:28:17.763739   41166 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.764201   41166 kubeconfig.go:125] found "functional-753440" server: "https://192.168.49.2:8441"
	I1009 18:28:17.765394   41166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:28:17.773512   41166 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 18:13:46.132659514 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 18:28:17.012910366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
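
Drift detection here is nothing more than `diff -u` between the deployed kubeadm.yaml and the freshly rendered .new file: exit status 0 means identical, 1 means drift (reconfigure, as happens above for the enable-admission-plugins change), anything higher is a genuine error. Sketched:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func configDrifted(current, next string) (bool, string, error) {
		out, err := exec.Command("sudo", "diff", "-u", current, next).CombinedOutput()
		if err == nil {
			return false, "", nil // exit 0: files identical
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			return true, string(out), nil // exit 1: files differ -> reconfigure
		}
		return false, "", err // exit 2+: diff itself failed
	}

	func main() {
		drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		if drifted {
			fmt.Println("kubeadm config drift:\n" + diff)
		}
	}
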
	I1009 18:28:17.773526   41166 kubeadm.go:1160] stopping kube-system containers ...
	I1009 18:28:17.773536   41166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 18:28:17.773573   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.801424   41166 cri.go:89] found id: ""
	I1009 18:28:17.801491   41166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:28:17.844900   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:17.853365   41166 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  9 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  9 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  9 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  9 18:17 /etc/kubernetes/scheduler.conf
	
	I1009 18:28:17.853413   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:28:17.861284   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:28:17.869531   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.869582   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:17.877552   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.885384   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.885430   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.893514   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:28:17.901554   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.901605   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:28:17.910046   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:17.918503   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:17.960612   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.029109   41166 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068473628s)
	I1009 18:28:19.029180   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.195034   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.243702   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
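
Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, so a failing phase surfaces immediately instead of being buried in a monolithic init. The sequence above as a loop; the phase list and paths are read straight from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const kubeadm = "/var/lib/minikube/binaries/v1.34.1/kubeadm"
		const config = "/var/tmp/minikube/kubeadm.yaml"
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{kubeadm, "init", "phase"}, p...)
			args = append(args, "--config", config)
			// Each phase reuses existing state where it can (e.g. the certs
			// phase skips certificates that already exist).
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				panic(fmt.Sprintf("kubeadm phase %v: %v\n%s", p, err, out))
			}
		}
	}
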
	I1009 18:28:19.294305   41166 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:28:19.294364   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:19.794527   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:20.295201   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the identical `Run: sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeats every ~500ms; 116 further attempts from 18:28:20.794575 through 18:29:18.294594 elided ...]
	I1009 18:29:18.794871   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
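
The probe fires roughly every 500 ms: `pgrep -xnf` exits 0 only when a kube-apiserver process matching the pattern exists, and in this run it never does, so the wait runs out and minikube falls back to gathering diagnostics below. A miniature version of that wait loop (the one-minute timeout here is illustrative, not minikube's exact setting):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServer(time.Minute); err != nil {
			fmt.Println(err) // here the process never appeared, so log gathering begins
		}
	}
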
	I1009 18:29:19.295378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:19.295433   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:19.321387   41166 cri.go:89] found id: ""
	I1009 18:29:19.321402   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.321411   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:19.321418   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:19.321468   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:19.348366   41166 cri.go:89] found id: ""
	I1009 18:29:19.348380   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.348387   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:19.348391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:19.348435   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:19.374894   41166 cri.go:89] found id: ""
	I1009 18:29:19.374906   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.374912   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:19.374916   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:19.374955   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:19.401088   41166 cri.go:89] found id: ""
	I1009 18:29:19.401106   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.401114   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:19.401121   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:19.401191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:19.428021   41166 cri.go:89] found id: ""
	I1009 18:29:19.428033   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.428043   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:19.428047   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:19.428105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:19.454576   41166 cri.go:89] found id: ""
	I1009 18:29:19.454590   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.454595   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:19.454599   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:19.454639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:19.480743   41166 cri.go:89] found id: ""
	I1009 18:29:19.480760   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.480767   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:19.480774   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:19.480783   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:19.509728   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:19.509743   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:19.578764   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:19.578781   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:19.590528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:19.590544   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:19.646752   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:19.639577    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.640309    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.641990    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.642451    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.643983    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five "connection refused" errors quoted above) ** /stderr **
	I1009 18:29:19.646773   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:19.646784   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
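
Each gathering pass has the same shape: kubelet and CRI-O logs via journalctl, dmesg, container status, and a `kubectl describe nodes` that fails with connection refused on localhost:8441 because no apiserver container ever started (every crictl listing above came back empty). The container-status one-liner hedges against a missing crictl with `which crictl || echo crictl` and then falls back to docker; natively that fallback is roughly:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to the docker CLI,
	// mirroring `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
	func containerStatus() ([]byte, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}
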
	I1009 18:29:22.208868   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:22.219498   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:22.219549   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:22.245808   41166 cri.go:89] found id: ""
	I1009 18:29:22.245825   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.245833   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:22.245839   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:22.245884   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:22.271240   41166 cri.go:89] found id: ""
	I1009 18:29:22.271253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.271259   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:22.271263   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:22.271301   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:22.299626   41166 cri.go:89] found id: ""
	I1009 18:29:22.299641   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.299650   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:22.299656   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:22.299699   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:22.326461   41166 cri.go:89] found id: ""
	I1009 18:29:22.326473   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.326479   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:22.326484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:22.326526   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:22.352237   41166 cri.go:89] found id: ""
	I1009 18:29:22.352253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.352264   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:22.352268   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:22.352316   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:22.378255   41166 cri.go:89] found id: ""
	I1009 18:29:22.378268   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.378276   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:22.378297   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:22.378351   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:22.403983   41166 cri.go:89] found id: ""
	I1009 18:29:22.403999   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.404006   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:22.404013   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:22.404024   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:22.470710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:22.470727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:22.482584   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:22.482599   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:22.536359   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:22.529981    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.530412    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.531972    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.532353    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.533814    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** (identical to the five "connection refused" errors quoted above) ** /stderr **
	I1009 18:29:22.536380   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:22.536394   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:22.601517   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:22.601533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:25.128918   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:25.139722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:25.139766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:25.165463   41166 cri.go:89] found id: ""
	I1009 18:29:25.165478   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.165486   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:25.165490   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:25.165537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:25.190387   41166 cri.go:89] found id: ""
	I1009 18:29:25.190400   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.190407   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:25.190411   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:25.190460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:25.216675   41166 cri.go:89] found id: ""
	I1009 18:29:25.216690   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.216698   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:25.216703   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:25.216747   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:25.242179   41166 cri.go:89] found id: ""
	I1009 18:29:25.242191   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.242197   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:25.242202   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:25.242248   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:25.267486   41166 cri.go:89] found id: ""
	I1009 18:29:25.267502   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.267511   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:25.267517   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:25.267568   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:25.297914   41166 cri.go:89] found id: ""
	I1009 18:29:25.297930   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.297939   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:25.297945   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:25.298000   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:25.328702   41166 cri.go:89] found id: ""
	I1009 18:29:25.328718   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.328727   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:25.328736   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:25.328747   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:25.395115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:25.395130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:25.407227   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:25.407245   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:25.462374   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:25.462400   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:25.462410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:25.525388   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:25.525409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
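
The cycle above repeats for as long as the apiserver stays down: minikube looks for a kube-apiserver process with pgrep, queries each expected control-plane container through crictl, and, when every query comes back empty, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal Go sketch of that probe loop follows; it is illustrative only, not minikube's actual logs.go/cri.go implementation, and the component names and crictl invocation are taken from the log lines above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// The control-plane components the log probes for, in the same order.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// containerIDs mirrors the command in the log:
//   sudo crictl ps -a --quiet --name=<component>
// An error or empty output both count as "no container found".
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for {
		missing := 0
		for _, c := range components {
			if len(containerIDs(c)) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				missing++
			}
		}
		if missing == 0 {
			return // control-plane containers exist; stop probing
		}
		time.Sleep(3 * time.Second) // matches the ~3 s spacing of the timestamps above
	}
}

On this run every query keeps returning an empty ID list, so the loop never exits and the same gather-and-retry cycle recurs until the start timeout expires.
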
	I1009 18:29:28.053225   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:28.063873   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:28.063918   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:28.088014   41166 cri.go:89] found id: ""
	I1009 18:29:28.088030   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.088038   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:28.088045   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:28.088091   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:28.114133   41166 cri.go:89] found id: ""
	I1009 18:29:28.114163   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.114172   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:28.114177   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:28.114221   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:28.138995   41166 cri.go:89] found id: ""
	I1009 18:29:28.139007   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.139017   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:28.139022   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:28.139072   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:28.163909   41166 cri.go:89] found id: ""
	I1009 18:29:28.163925   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.163984   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:28.163991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:28.164032   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:28.190078   41166 cri.go:89] found id: ""
	I1009 18:29:28.190091   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.190096   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:28.190101   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:28.190171   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:28.215236   41166 cri.go:89] found id: ""
	I1009 18:29:28.215251   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.215260   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:28.215265   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:28.215315   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:28.241659   41166 cri.go:89] found id: ""
	I1009 18:29:28.241675   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.241684   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:28.241692   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:28.241701   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:28.312258   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:28.312275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:28.323979   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:28.323994   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:28.380524   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:28.380538   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:28.380547   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:28.442571   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:28.442588   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:30.972438   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:30.983019   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:30.983078   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:31.007563   41166 cri.go:89] found id: ""
	I1009 18:29:31.007577   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.007585   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:31.007591   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:31.007665   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:31.033297   41166 cri.go:89] found id: ""
	I1009 18:29:31.033312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.033320   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:31.033326   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:31.033381   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:31.058733   41166 cri.go:89] found id: ""
	I1009 18:29:31.058748   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.058756   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:31.058761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:31.058815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:31.084119   41166 cri.go:89] found id: ""
	I1009 18:29:31.084133   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.084156   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:31.084162   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:31.084206   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:31.109429   41166 cri.go:89] found id: ""
	I1009 18:29:31.109442   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.109448   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:31.109452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:31.109510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:31.135299   41166 cri.go:89] found id: ""
	I1009 18:29:31.135312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.135322   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:31.135328   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:31.135413   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:31.162606   41166 cri.go:89] found id: ""
	I1009 18:29:31.162621   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.162636   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:31.162643   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:31.162652   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:31.230506   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:31.230556   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:31.241809   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:31.241825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:31.297388   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:31.297398   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:31.297413   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:31.361486   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:31.361502   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:33.891238   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:33.902005   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:33.902060   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:33.927598   41166 cri.go:89] found id: ""
	I1009 18:29:33.927612   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.927618   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:33.927622   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:33.927673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:33.952038   41166 cri.go:89] found id: ""
	I1009 18:29:33.952053   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.952061   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:33.952066   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:33.952145   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:33.976526   41166 cri.go:89] found id: ""
	I1009 18:29:33.976541   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.976549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:33.976556   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:33.976610   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:34.003219   41166 cri.go:89] found id: ""
	I1009 18:29:34.003234   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.003242   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:34.003247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:34.003330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:34.029762   41166 cri.go:89] found id: ""
	I1009 18:29:34.029775   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.029781   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:34.029785   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:34.029840   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:34.054085   41166 cri.go:89] found id: ""
	I1009 18:29:34.054097   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.054107   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:34.054112   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:34.054179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:34.080890   41166 cri.go:89] found id: ""
	I1009 18:29:34.080903   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.080909   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:34.080915   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:34.080926   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:34.110411   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:34.110426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:34.181234   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:34.181254   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:34.192758   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:34.192772   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:34.248477   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:34.248486   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:34.248496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
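
Every describe-nodes attempt in these cycles fails identically: kubectl cannot reach https://localhost:8441 and gets "connection refused", which means nothing is listening on the apiserver port inside the node at all. A standalone TCP check of that port reproduces the symptom; this is a hypothetical helper, not part of the test suite, and the port number is the one kubectl is dialing in the errors above.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 8441 is the apiserver port from the kubectl errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this prints the same
		// "connect: connection refused" seen in the kubectl stderr.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}

Run from inside the node (for example via minikube ssh), this distinguishes a closed port from TLS or authentication failures, which would at least complete the TCP connect first.
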
	I1009 18:29:36.816158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:36.827291   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:36.827356   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:36.851760   41166 cri.go:89] found id: ""
	I1009 18:29:36.851775   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.851783   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:36.851789   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:36.851843   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:36.877217   41166 cri.go:89] found id: ""
	I1009 18:29:36.877231   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.877238   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:36.877243   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:36.877284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:36.902388   41166 cri.go:89] found id: ""
	I1009 18:29:36.902401   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.902407   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:36.902411   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:36.902450   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:36.927658   41166 cri.go:89] found id: ""
	I1009 18:29:36.927673   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.927679   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:36.927683   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:36.927735   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:36.952663   41166 cri.go:89] found id: ""
	I1009 18:29:36.952681   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.952688   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:36.952692   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:36.952731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:36.977753   41166 cri.go:89] found id: ""
	I1009 18:29:36.977768   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.977774   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:36.977779   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:36.977819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:37.002782   41166 cri.go:89] found id: ""
	I1009 18:29:37.002796   41166 logs.go:282] 0 containers: []
	W1009 18:29:37.002801   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:37.002807   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:37.002816   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:37.069710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:37.069726   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:37.081854   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:37.081876   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:37.136826   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:37.136835   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:37.136844   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:37.201251   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:37.201270   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:39.729692   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:39.740542   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:39.740597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:39.766240   41166 cri.go:89] found id: ""
	I1009 18:29:39.766255   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.766263   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:39.766269   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:39.766330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:39.792273   41166 cri.go:89] found id: ""
	I1009 18:29:39.792289   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.792298   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:39.792304   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:39.792360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:39.818498   41166 cri.go:89] found id: ""
	I1009 18:29:39.818513   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.818521   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:39.818526   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:39.818580   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:39.844118   41166 cri.go:89] found id: ""
	I1009 18:29:39.844131   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.844155   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:39.844161   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:39.844204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:39.870849   41166 cri.go:89] found id: ""
	I1009 18:29:39.870862   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.870868   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:39.870872   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:39.870911   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:39.896931   41166 cri.go:89] found id: ""
	I1009 18:29:39.896944   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.896949   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:39.896954   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:39.896996   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:39.923519   41166 cri.go:89] found id: ""
	I1009 18:29:39.923531   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.923537   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:39.923544   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:39.923553   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:39.990863   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:39.990880   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:40.002519   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:40.002534   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:40.059328   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:40.052153    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.052750    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054419    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054856    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.056426    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:40.052153    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.052750    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054419    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054856    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.056426    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:40.059339   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:40.059349   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:40.125328   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:40.125345   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:42.656004   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:42.666452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:42.666495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:42.691012   41166 cri.go:89] found id: ""
	I1009 18:29:42.691027   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.691037   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:42.691043   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:42.691086   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:42.715311   41166 cri.go:89] found id: ""
	I1009 18:29:42.715327   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.715335   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:42.715346   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:42.715385   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:42.741564   41166 cri.go:89] found id: ""
	I1009 18:29:42.741577   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.741584   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:42.741590   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:42.741639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:42.765961   41166 cri.go:89] found id: ""
	I1009 18:29:42.765974   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.765980   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:42.765985   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:42.766027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:42.792117   41166 cri.go:89] found id: ""
	I1009 18:29:42.792129   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.792149   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:42.792155   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:42.792208   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:42.817726   41166 cri.go:89] found id: ""
	I1009 18:29:42.817738   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.817745   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:42.817749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:42.817799   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:42.842806   41166 cri.go:89] found id: ""
	I1009 18:29:42.842823   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.842829   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:42.842836   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:42.842850   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:42.908734   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:42.908751   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:42.919767   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:42.919780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:42.975159   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:42.968444    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.969012    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.970635    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.971181    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.972729    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:42.968444    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.969012    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.970635    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.971181    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.972729    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:42.975170   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:42.975181   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:43.041463   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:43.041480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:45.571837   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:45.582376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:45.582431   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:45.608198   41166 cri.go:89] found id: ""
	I1009 18:29:45.608211   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.608217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:45.608221   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:45.608286   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:45.635099   41166 cri.go:89] found id: ""
	I1009 18:29:45.635112   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.635118   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:45.635126   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:45.635182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:45.660701   41166 cri.go:89] found id: ""
	I1009 18:29:45.660714   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.660720   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:45.660725   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:45.660765   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:45.686907   41166 cri.go:89] found id: ""
	I1009 18:29:45.686920   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.686926   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:45.686931   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:45.686981   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:45.712880   41166 cri.go:89] found id: ""
	I1009 18:29:45.712893   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.712899   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:45.712902   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:45.712941   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:45.738114   41166 cri.go:89] found id: ""
	I1009 18:29:45.738128   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.738147   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:45.738155   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:45.738200   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:45.764157   41166 cri.go:89] found id: ""
	I1009 18:29:45.764172   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.764178   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:45.764187   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:45.764196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:45.793189   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:45.793204   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:45.861447   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:45.861463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:45.872975   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:45.872988   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:45.928792   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:45.921633    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.922319    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.923962    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.924449    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.926072    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:45.921633    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.922319    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.923962    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.924449    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.926072    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:45.928810   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:45.928820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:48.494959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:48.505724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:48.505766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:48.531052   41166 cri.go:89] found id: ""
	I1009 18:29:48.531087   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.531099   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:48.531103   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:48.531167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:48.555479   41166 cri.go:89] found id: ""
	I1009 18:29:48.555492   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.555498   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:48.555502   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:48.555543   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:48.581427   41166 cri.go:89] found id: ""
	I1009 18:29:48.581444   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.581452   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:48.581460   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:48.581509   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:48.607162   41166 cri.go:89] found id: ""
	I1009 18:29:48.607176   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.607182   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:48.607187   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:48.607235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:48.632033   41166 cri.go:89] found id: ""
	I1009 18:29:48.632049   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.632058   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:48.632064   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:48.632106   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:48.657205   41166 cri.go:89] found id: ""
	I1009 18:29:48.657218   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.657224   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:48.657229   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:48.657280   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:48.681952   41166 cri.go:89] found id: ""
	I1009 18:29:48.681965   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.681970   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:48.681976   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:48.681986   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:48.751441   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:48.751459   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:48.763252   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:48.763266   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:48.819401   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:48.812637    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.813245    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.814774    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.815273    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.816784    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:48.812637    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.813245    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.814774    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.815273    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.816784    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:48.819413   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:48.819426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:48.882158   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:48.882176   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
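The three-second cycles above are minikube's wait-for-apiserver loop: each pass first looks for a kube-apiserver process, then asks the CRI for containers matching each control-plane component by name, and every query returns empty. A minimal sketch of replaying that discovery step by hand, assuming shell access to the node (e.g. via minikube ssh); the pgrep pattern, component names, and crictl flags are copied from the log lines above:

    #!/usr/bin/env bash
    set -u
    # Same process check the loop starts with.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process is running"
    else
      echo "no kube-apiserver process found"
    fi
    # Per-component container lookup, mirroring the crictl calls above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<no containers>}"
    done

On this run every lookup prints "<no containers>", which is why each pass falls through to the log-gathering branch that follows.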
	I1009 18:29:51.412646   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:51.423570   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:51.423613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:51.450043   41166 cri.go:89] found id: ""
	I1009 18:29:51.450058   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.450076   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:51.450081   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:51.450130   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:51.474654   41166 cri.go:89] found id: ""
	I1009 18:29:51.474669   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.474676   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:51.474683   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:51.474721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:51.500060   41166 cri.go:89] found id: ""
	I1009 18:29:51.500074   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.500079   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:51.500083   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:51.500125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:51.525095   41166 cri.go:89] found id: ""
	I1009 18:29:51.525110   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.525117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:51.525128   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:51.525192   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:51.550903   41166 cri.go:89] found id: ""
	I1009 18:29:51.550915   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.550921   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:51.550925   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:51.550963   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:51.576021   41166 cri.go:89] found id: ""
	I1009 18:29:51.576039   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.576045   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:51.576050   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:51.576101   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:51.601302   41166 cri.go:89] found id: ""
	I1009 18:29:51.601331   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.601337   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:51.601345   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:51.601357   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:51.673218   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:51.673234   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:51.684673   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:51.684688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:51.740747   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:51.733129    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.733652    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736069    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736560    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.738067    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:51.733129    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.733652    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736069    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736560    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.738067    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:51.740756   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:51.740765   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:51.804392   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:51.804410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:54.334647   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:54.345214   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:54.345259   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:54.371054   41166 cri.go:89] found id: ""
	I1009 18:29:54.371070   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.371077   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:54.371081   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:54.371123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:54.397390   41166 cri.go:89] found id: ""
	I1009 18:29:54.397406   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.397414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:54.397420   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:54.397469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:54.423212   41166 cri.go:89] found id: ""
	I1009 18:29:54.423225   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.423231   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:54.423235   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:54.423277   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:54.449723   41166 cri.go:89] found id: ""
	I1009 18:29:54.449738   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.449747   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:54.449753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:54.449794   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:54.476976   41166 cri.go:89] found id: ""
	I1009 18:29:54.476994   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.476999   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:54.477004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:54.477056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:54.502387   41166 cri.go:89] found id: ""
	I1009 18:29:54.502409   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.502419   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:54.502425   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:54.502471   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:54.528021   41166 cri.go:89] found id: ""
	I1009 18:29:54.528037   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.528045   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:54.528053   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:54.528062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:54.596551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:54.596569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:54.607908   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:54.607921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:54.663274   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:54.655349    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.655928    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658342    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658895    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.660440    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:54.655349    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.655928    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658342    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658895    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.660440    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:54.663284   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:54.663296   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:54.724548   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:54.724565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
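When no containers are found, each pass falls back to collecting diagnostics over SSH. A hypothetical one-shot version of that collection, reusing the exact commands from the log (the -n 400 tails and the dmesg severity filter are taken verbatim; --no-pager is added only so the output streams when run interactively):

    #!/usr/bin/env bash
    # The four gathers the loop performs: kubelet and CRI-O unit logs,
    # high-severity kernel messages, and the container table.
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a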
	I1009 18:29:57.253959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:57.264749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:57.264793   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:57.292216   41166 cri.go:89] found id: ""
	I1009 18:29:57.292234   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.292244   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:57.292252   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:57.292322   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:57.320628   41166 cri.go:89] found id: ""
	I1009 18:29:57.320644   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.320657   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:57.320663   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:57.320711   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:57.347524   41166 cri.go:89] found id: ""
	I1009 18:29:57.347541   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.347549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:57.347555   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:57.347599   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:57.374005   41166 cri.go:89] found id: ""
	I1009 18:29:57.374021   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.374029   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:57.374034   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:57.374080   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:57.398685   41166 cri.go:89] found id: ""
	I1009 18:29:57.398700   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.398706   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:57.398710   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:57.398758   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:57.424224   41166 cri.go:89] found id: ""
	I1009 18:29:57.424237   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.424243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:57.424247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:57.424298   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:57.449118   41166 cri.go:89] found id: ""
	I1009 18:29:57.449144   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.449153   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:57.449161   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:57.449170   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:57.477726   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:57.477741   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:57.549189   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:57.549206   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:57.560914   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:57.560933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:57.615954   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:57.615970   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:57.615980   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:00.177763   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:00.188584   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:00.188628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:00.214820   41166 cri.go:89] found id: ""
	I1009 18:30:00.214835   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.214844   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:00.214851   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:00.214895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:00.239376   41166 cri.go:89] found id: ""
	I1009 18:30:00.239393   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.239401   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:00.239407   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:00.239447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:00.265476   41166 cri.go:89] found id: ""
	I1009 18:30:00.265492   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.265500   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:00.265506   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:00.265556   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:00.291131   41166 cri.go:89] found id: ""
	I1009 18:30:00.291158   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.291167   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:00.291174   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:00.291226   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:00.316623   41166 cri.go:89] found id: ""
	I1009 18:30:00.316636   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.316642   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:00.316646   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:00.316693   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:00.341462   41166 cri.go:89] found id: ""
	I1009 18:30:00.341476   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.341485   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:00.341490   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:00.341531   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:00.366641   41166 cri.go:89] found id: ""
	I1009 18:30:00.366657   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.366663   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:00.366670   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:00.366679   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:00.397505   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:00.397539   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:00.469540   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:00.469557   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:00.481466   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:00.481480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:00.537449   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:00.537457   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:00.537466   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
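Every kubectl describe nodes attempt above fails identically with "dial tcp [::1]:8441: connect: connection refused": nothing is listening on this profile's apiserver port at all, as opposed to an apiserver that is up but answering with errors. A short triage sketch for that symptom, assuming a kubeadm-style node layout (the /etc/kubernetes/manifests static-pod path is a conventional assumption, not something this log confirms):

    #!/usr/bin/env bash
    # Is anything bound to the apiserver port (8441 for this profile)?
    sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
    # Were any control-plane containers ever created?
    sudo crictl ps -a
    # Hypothetical kubeadm layout: static pod manifests would live here.
    ls /etc/kubernetes/manifests 2>/dev/null
    # Recent kubelet log tail, to see why no static pods are starting.
    sudo journalctl -u kubelet -n 50 --no-pager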
	I1009 18:30:03.107457   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:03.117969   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:03.118030   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:03.144661   41166 cri.go:89] found id: ""
	I1009 18:30:03.144676   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.144684   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:03.144689   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:03.144731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:03.169819   41166 cri.go:89] found id: ""
	I1009 18:30:03.169832   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.169838   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:03.169842   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:03.169880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:03.195252   41166 cri.go:89] found id: ""
	I1009 18:30:03.195264   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.195271   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:03.195276   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:03.195319   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:03.221154   41166 cri.go:89] found id: ""
	I1009 18:30:03.221169   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.221176   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:03.221181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:03.221222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:03.247656   41166 cri.go:89] found id: ""
	I1009 18:30:03.247670   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.247676   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:03.247680   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:03.247736   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:03.273363   41166 cri.go:89] found id: ""
	I1009 18:30:03.273378   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.273386   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:03.273391   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:03.273439   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:03.297383   41166 cri.go:89] found id: ""
	I1009 18:30:03.297399   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.297407   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:03.297415   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:03.297426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:03.327096   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:03.327110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:03.396551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:03.396569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:03.408005   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:03.408020   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:03.462643   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:03.462656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:03.462667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.023381   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:06.034110   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:06.034175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:06.059176   41166 cri.go:89] found id: ""
	I1009 18:30:06.059191   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.059197   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:06.059201   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:06.059261   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:06.085110   41166 cri.go:89] found id: ""
	I1009 18:30:06.085126   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.085146   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:06.085154   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:06.085211   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:06.110722   41166 cri.go:89] found id: ""
	I1009 18:30:06.110738   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.110747   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:06.110753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:06.110806   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:06.136728   41166 cri.go:89] found id: ""
	I1009 18:30:06.136744   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.136752   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:06.136758   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:06.136815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:06.162322   41166 cri.go:89] found id: ""
	I1009 18:30:06.162337   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.162345   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:06.162351   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:06.162409   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:06.189203   41166 cri.go:89] found id: ""
	I1009 18:30:06.189217   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.189225   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:06.189230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:06.189374   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:06.215767   41166 cri.go:89] found id: ""
	I1009 18:30:06.215781   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.215790   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:06.215798   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:06.215811   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:06.286131   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:06.286154   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:06.297884   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:06.297899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:06.354614   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:06.354625   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:06.354634   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.421245   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:06.421263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:08.950561   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:08.961412   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:08.961461   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:08.985056   41166 cri.go:89] found id: ""
	I1009 18:30:08.985073   41166 logs.go:282] 0 containers: []
	W1009 18:30:08.985081   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:08.985086   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:08.985155   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:09.010161   41166 cri.go:89] found id: ""
	I1009 18:30:09.010177   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.010185   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:09.010190   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:09.010240   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:09.035006   41166 cri.go:89] found id: ""
	I1009 18:30:09.035021   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.035030   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:09.035035   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:09.035079   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:09.059807   41166 cri.go:89] found id: ""
	I1009 18:30:09.059822   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.059831   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:09.059836   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:09.059877   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:09.085467   41166 cri.go:89] found id: ""
	I1009 18:30:09.085482   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.085490   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:09.085495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:09.085536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:09.110808   41166 cri.go:89] found id: ""
	I1009 18:30:09.110821   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.110826   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:09.110831   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:09.110869   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:09.135842   41166 cri.go:89] found id: ""
	I1009 18:30:09.135854   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.135860   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:09.135867   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:09.135875   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:09.195931   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:09.195948   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:09.225362   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:09.225375   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:09.296888   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:09.296905   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:09.309206   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:09.309223   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:09.365940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
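The timestamps put the cycle on a roughly three-second cadence (18:29:48, :51, :54, :57, 18:30:00, ...) with no progress between passes. A rough, hypothetical equivalent of that polling loop with an explicit deadline; the 300-second budget is illustrative only and is not taken from minikube:

    #!/usr/bin/env bash
    # Poll every 3s for the apiserver process, as the log cadence suggests.
    deadline=$((SECONDS + 300))   # illustrative budget, not minikube's value
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if (( SECONDS >= deadline )); then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver process detected"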
	I1009 18:30:11.867608   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:11.878320   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:11.878362   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:11.904080   41166 cri.go:89] found id: ""
	I1009 18:30:11.904094   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.904103   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:11.904109   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:11.904175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:11.930291   41166 cri.go:89] found id: ""
	I1009 18:30:11.930308   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.930327   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:11.930332   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:11.930372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:11.955946   41166 cri.go:89] found id: ""
	I1009 18:30:11.955959   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.955965   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:11.955970   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:11.956022   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:11.981169   41166 cri.go:89] found id: ""
	I1009 18:30:11.981184   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.981190   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:11.981197   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:11.981254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:12.006868   41166 cri.go:89] found id: ""
	I1009 18:30:12.006882   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.006890   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:12.006896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:12.006950   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:12.033045   41166 cri.go:89] found id: ""
	I1009 18:30:12.033062   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.033070   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:12.033076   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:12.033123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:12.059215   41166 cri.go:89] found id: ""
	I1009 18:30:12.059228   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.059233   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:12.059240   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:12.059249   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:12.088610   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:12.088630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:12.156730   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:12.156750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:12.168340   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:12.168354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:12.224955   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:12.217733    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.218350    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220045    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220517    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.222048    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:12.217733    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.218350    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220045    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220517    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.222048    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:12.224965   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:12.224974   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:14.790502   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:14.801228   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:14.801285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:14.828449   41166 cri.go:89] found id: ""
	I1009 18:30:14.828469   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.828478   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:14.828486   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:14.828539   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:14.854655   41166 cri.go:89] found id: ""
	I1009 18:30:14.854672   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.854681   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:14.854687   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:14.854730   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:14.880081   41166 cri.go:89] found id: ""
	I1009 18:30:14.880103   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.880110   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:14.880119   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:14.880182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:14.906543   41166 cri.go:89] found id: ""
	I1009 18:30:14.906556   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.906562   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:14.906567   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:14.906607   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:14.932338   41166 cri.go:89] found id: ""
	I1009 18:30:14.932354   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.932360   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:14.932365   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:14.932417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:14.959648   41166 cri.go:89] found id: ""
	I1009 18:30:14.959661   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.959666   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:14.959670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:14.959722   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:14.985626   41166 cri.go:89] found id: ""
	I1009 18:30:14.985642   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.985651   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:14.985657   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:14.985667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:15.059129   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:15.059150   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:15.070684   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:15.070698   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:15.127441   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:15.120544    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.121101    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.122649    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.123113    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.124615    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:15.127451   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:15.127462   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:15.188736   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:15.188755   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
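
Every describe-nodes attempt fails identically: the node's /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and the TCP connect is refused because no apiserver container ever started. The refusal can be confirmed without kubectl, assuming curl is present on the node (an assumption, it is not shown in the log):

	# sketch only: curl exits 7 ("failed to connect") when the connection is refused
	curl -ks --max-time 5 https://localhost:8441/healthz; echo "curl exit: $?"
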
	I1009 18:30:17.720548   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:17.731158   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:17.731199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:17.756463   41166 cri.go:89] found id: ""
	I1009 18:30:17.756478   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.756485   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:17.756489   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:17.756532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:17.780776   41166 cri.go:89] found id: ""
	I1009 18:30:17.780792   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.780799   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:17.780804   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:17.780845   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:17.805635   41166 cri.go:89] found id: ""
	I1009 18:30:17.805648   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.805654   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:17.805658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:17.805700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:17.832060   41166 cri.go:89] found id: ""
	I1009 18:30:17.832074   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.832079   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:17.832084   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:17.832125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:17.859215   41166 cri.go:89] found id: ""
	I1009 18:30:17.859231   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.859240   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:17.859248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:17.859299   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:17.884007   41166 cri.go:89] found id: ""
	I1009 18:30:17.884021   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.884027   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:17.884031   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:17.884073   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:17.908524   41166 cri.go:89] found id: ""
	I1009 18:30:17.908537   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.908543   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:17.908550   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:17.908559   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:17.974071   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:17.974088   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:17.985794   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:17.985809   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:18.042658   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:18.035698    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.036247    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.037804    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.038378    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.039940    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:18.042678   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:18.042688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:18.104183   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:18.104201   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
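
The "container status" step relies on a shell fallback: the backticks expand to the crictl path when which succeeds (or to the literal word crictl when it does not), and the trailing || sudo docker ps -a runs only if that listing fails, so one command line covers both CRI-O and Docker runtimes. The same chain with $(...) substitution, as a sketch:

	# sketch only: the fallback chain without backticks
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
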
	I1009 18:30:20.634002   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:20.645000   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:20.645074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:20.671295   41166 cri.go:89] found id: ""
	I1009 18:30:20.671309   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.671320   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:20.671325   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:20.671370   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:20.699380   41166 cri.go:89] found id: ""
	I1009 18:30:20.699393   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.699399   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:20.699404   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:20.699508   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:20.728459   41166 cri.go:89] found id: ""
	I1009 18:30:20.728483   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.728490   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:20.728502   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:20.728571   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:20.755606   41166 cri.go:89] found id: ""
	I1009 18:30:20.755626   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.755637   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:20.755643   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:20.755704   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:20.783272   41166 cri.go:89] found id: ""
	I1009 18:30:20.783285   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.783291   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:20.783295   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:20.783338   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:20.810985   41166 cri.go:89] found id: ""
	I1009 18:30:20.810998   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.811005   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:20.811009   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:20.811090   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:20.838557   41166 cri.go:89] found id: ""
	I1009 18:30:20.838573   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.838580   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:20.838588   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:20.838597   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.868656   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:20.868669   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:20.940019   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:20.940041   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:20.952293   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:20.952307   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:21.010202   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:21.003172    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.003783    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.005520    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.006014    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.007633    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:21.010215   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:21.010228   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.575003   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:23.585670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:23.585721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:23.611187   41166 cri.go:89] found id: ""
	I1009 18:30:23.611202   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.611208   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:23.611216   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:23.611267   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:23.636952   41166 cri.go:89] found id: ""
	I1009 18:30:23.636966   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.636972   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:23.636977   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:23.637018   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:23.661266   41166 cri.go:89] found id: ""
	I1009 18:30:23.661282   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.661289   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:23.661294   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:23.661343   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:23.687560   41166 cri.go:89] found id: ""
	I1009 18:30:23.687573   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.687578   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:23.687583   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:23.687637   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:23.712015   41166 cri.go:89] found id: ""
	I1009 18:30:23.712031   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.712040   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:23.712046   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:23.712103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:23.738106   41166 cri.go:89] found id: ""
	I1009 18:30:23.738120   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.738126   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:23.738130   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:23.738191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:23.764275   41166 cri.go:89] found id: ""
	I1009 18:30:23.764288   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.764307   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:23.764314   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:23.764322   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:23.775354   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:23.775367   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:23.831862   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:23.824872    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.825499    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827105    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827605    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.829326    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:23.831884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:23.831893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.894598   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:23.894614   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:23.922715   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:23.922731   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:26.494758   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:26.505984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:26.506076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:26.532013   41166 cri.go:89] found id: ""
	I1009 18:30:26.532029   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.532037   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:26.532042   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:26.532088   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:26.558247   41166 cri.go:89] found id: ""
	I1009 18:30:26.558278   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.558286   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:26.558290   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:26.558335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:26.583466   41166 cri.go:89] found id: ""
	I1009 18:30:26.583479   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.583485   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:26.583495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:26.583536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:26.611101   41166 cri.go:89] found id: ""
	I1009 18:30:26.611114   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.611126   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:26.611131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:26.611199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:26.636533   41166 cri.go:89] found id: ""
	I1009 18:30:26.636547   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.636553   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:26.636557   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:26.636594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:26.661023   41166 cri.go:89] found id: ""
	I1009 18:30:26.661039   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.661048   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:26.661055   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:26.661103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:26.686499   41166 cri.go:89] found id: ""
	I1009 18:30:26.686511   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.686518   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:26.686524   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:26.686533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:26.750968   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:26.750986   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:26.762679   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:26.762697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:26.819065   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:26.812332    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.812909    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.814580    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.815057    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.816557    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:26.819088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:26.819097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:26.882784   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:26.882801   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:29.411957   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:29.422542   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:29.422590   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:29.448891   41166 cri.go:89] found id: ""
	I1009 18:30:29.448907   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.448916   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:29.448921   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:29.448968   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:29.474806   41166 cri.go:89] found id: ""
	I1009 18:30:29.474823   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.474829   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:29.474834   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:29.474875   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:29.501280   41166 cri.go:89] found id: ""
	I1009 18:30:29.501293   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.501299   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:29.501303   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:29.501344   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:29.528191   41166 cri.go:89] found id: ""
	I1009 18:30:29.528204   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.528210   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:29.528214   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:29.528253   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:29.554786   41166 cri.go:89] found id: ""
	I1009 18:30:29.554799   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.554806   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:29.554811   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:29.554853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:29.579893   41166 cri.go:89] found id: ""
	I1009 18:30:29.579909   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.579918   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:29.579922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:29.579965   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:29.605961   41166 cri.go:89] found id: ""
	I1009 18:30:29.605974   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.605983   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:29.605998   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:29.606010   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:29.667811   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:29.667839   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:29.697600   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:29.697622   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:29.767295   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:29.767316   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:29.779348   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:29.779365   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:29.835961   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:29.829223    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.829767    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831335    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831758    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.833341    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:32.337665   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:32.348466   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:32.348524   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:32.374886   41166 cri.go:89] found id: ""
	I1009 18:30:32.374904   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.374914   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:32.374922   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:32.374970   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:32.400529   41166 cri.go:89] found id: ""
	I1009 18:30:32.400545   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.400554   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:32.400560   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:32.400613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:32.426791   41166 cri.go:89] found id: ""
	I1009 18:30:32.426807   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.426812   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:32.426817   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:32.426857   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:32.452312   41166 cri.go:89] found id: ""
	I1009 18:30:32.452327   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.452332   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:32.452337   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:32.452418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:32.477378   41166 cri.go:89] found id: ""
	I1009 18:30:32.477392   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.477398   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:32.477402   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:32.477445   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:32.503118   41166 cri.go:89] found id: ""
	I1009 18:30:32.503131   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.503154   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:32.503161   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:32.503204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:32.528118   41166 cri.go:89] found id: ""
	I1009 18:30:32.528132   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.528156   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:32.528165   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:32.528175   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:32.591877   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:32.591893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:32.603816   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:32.603831   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:32.660681   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:32.653480    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.654399    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.655963    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.656383    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.657937    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:32.660698   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:32.660707   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:32.720544   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:32.720563   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
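
With the API server down, the only logs carrying signal in these iterations are the kubelet and CRI-O journals gathered above; the kubelet unit normally names the startup failure directly. A sketch of narrowing that down, assuming journalctl access on the node:

	# sketch only: surface recent kubelet and CRI-O errors
	sudo journalctl -u kubelet -n 200 --no-pager | grep -iE 'error|fail' | tail -n 20
	sudo journalctl -u crio -n 200 --no-pager | grep -iE 'error|fail' | tail -n 20
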
	I1009 18:30:35.252168   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:35.262910   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:35.262957   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:35.288174   41166 cri.go:89] found id: ""
	I1009 18:30:35.288191   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.288199   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:35.288205   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:35.288262   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:35.313498   41166 cri.go:89] found id: ""
	I1009 18:30:35.313515   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.313523   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:35.313529   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:35.313576   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:35.337926   41166 cri.go:89] found id: ""
	I1009 18:30:35.337942   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.337950   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:35.337956   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:35.337998   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:35.364071   41166 cri.go:89] found id: ""
	I1009 18:30:35.364085   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.364093   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:35.364100   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:35.364185   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:35.390353   41166 cri.go:89] found id: ""
	I1009 18:30:35.390367   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.390373   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:35.390378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:35.390419   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:35.416164   41166 cri.go:89] found id: ""
	I1009 18:30:35.416179   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.416185   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:35.416190   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:35.416230   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:35.442115   41166 cri.go:89] found id: ""
	I1009 18:30:35.442131   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.442152   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:35.442161   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:35.442172   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:35.512407   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:35.512424   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:35.524233   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:35.524246   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:35.581940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:35.574890    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.575447    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577004    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577533    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.579108    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:35.581954   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:35.581963   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:35.645796   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:35.645815   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:38.176188   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:38.187286   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:38.187337   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:38.213431   41166 cri.go:89] found id: ""
	I1009 18:30:38.213447   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.213454   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:38.213458   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:38.213506   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:38.239289   41166 cri.go:89] found id: ""
	I1009 18:30:38.239305   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.239313   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:38.239322   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:38.239375   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:38.266575   41166 cri.go:89] found id: ""
	I1009 18:30:38.266590   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.266599   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:38.266604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:38.266659   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:38.293047   41166 cri.go:89] found id: ""
	I1009 18:30:38.293062   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.293071   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:38.293077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:38.293132   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:38.321467   41166 cri.go:89] found id: ""
	I1009 18:30:38.321483   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.321497   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:38.321503   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:38.321550   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:38.348227   41166 cri.go:89] found id: ""
	I1009 18:30:38.348251   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.348259   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:38.348263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:38.348306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:38.374014   41166 cri.go:89] found id: ""
	I1009 18:30:38.374027   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.374033   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:38.374039   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:38.374049   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:38.402788   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:38.402802   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:38.467775   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:38.467793   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:38.479120   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:38.479133   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:38.534788   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:38.527716   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.528266   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.529835   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.530310   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.531921   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:38.534798   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:38.534808   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:41.097400   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:41.108281   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:41.108326   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:41.134432   41166 cri.go:89] found id: ""
	I1009 18:30:41.134448   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.134456   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:41.134461   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:41.134502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:41.160000   41166 cri.go:89] found id: ""
	I1009 18:30:41.160045   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.160055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:41.160071   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:41.160116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:41.185957   41166 cri.go:89] found id: ""
	I1009 18:30:41.185971   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.185979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:41.185985   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:41.186046   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:41.212581   41166 cri.go:89] found id: ""
	I1009 18:30:41.212595   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.212604   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:41.212611   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:41.212664   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:41.239537   41166 cri.go:89] found id: ""
	I1009 18:30:41.239550   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.239556   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:41.239560   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:41.239603   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:41.264876   41166 cri.go:89] found id: ""
	I1009 18:30:41.264891   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.264906   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:41.264915   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:41.264961   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:41.293949   41166 cri.go:89] found id: ""
	I1009 18:30:41.293962   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.293968   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:41.293975   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:41.293985   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:41.306008   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:41.306023   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:41.363715   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:41.356554   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.357179   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.358764   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.359246   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.361018   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:41.363727   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:41.363736   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:41.427974   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:41.427993   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:41.457063   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:41.457080   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:44.027395   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:44.038545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:44.038600   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:44.065345   41166 cri.go:89] found id: ""
	I1009 18:30:44.065358   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.065364   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:44.065369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:44.065418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:44.092543   41166 cri.go:89] found id: ""
	I1009 18:30:44.092558   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.092572   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:44.092578   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:44.092628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:44.117582   41166 cri.go:89] found id: ""
	I1009 18:30:44.117598   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.117606   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:44.117612   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:44.117663   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:44.144537   41166 cri.go:89] found id: ""
	I1009 18:30:44.144554   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.144563   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:44.144569   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:44.144630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:44.170004   41166 cri.go:89] found id: ""
	I1009 18:30:44.170020   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.170027   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:44.170032   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:44.170085   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:44.195566   41166 cri.go:89] found id: ""
	I1009 18:30:44.195581   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.195587   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:44.195591   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:44.195638   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:44.221237   41166 cri.go:89] found id: ""
	I1009 18:30:44.221250   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.221256   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:44.221264   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:44.221273   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:44.290040   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:44.290059   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:44.301528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:44.301543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:44.356883   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:44.350018   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.350577   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352116   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352527   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.353985   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:44.356892   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:44.356904   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:44.421203   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:44.421220   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:46.952072   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:46.962761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:46.962852   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:46.988381   41166 cri.go:89] found id: ""
	I1009 18:30:46.988395   41166 logs.go:282] 0 containers: []
	W1009 18:30:46.988401   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:46.988406   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:46.988447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:47.014123   41166 cri.go:89] found id: ""
	I1009 18:30:47.014151   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.014161   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:47.014167   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:47.014223   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:47.040379   41166 cri.go:89] found id: ""
	I1009 18:30:47.040395   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.040403   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:47.040409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:47.040460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:47.066430   41166 cri.go:89] found id: ""
	I1009 18:30:47.066444   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.066450   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:47.066454   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:47.066495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:47.092458   41166 cri.go:89] found id: ""
	I1009 18:30:47.092471   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.092476   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:47.092481   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:47.092522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:47.118558   41166 cri.go:89] found id: ""
	I1009 18:30:47.118574   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.118582   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:47.118588   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:47.118639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:47.143956   41166 cri.go:89] found id: ""
	I1009 18:30:47.143969   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.143975   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:47.143983   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:47.143991   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:47.204921   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:47.204939   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:47.233955   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:47.233972   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:47.299659   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:47.299725   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:47.310930   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:47.310944   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:47.365782   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:47.358862   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.359473   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361059   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361558   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.363067   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:49.866821   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:49.877492   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:49.877546   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:49.902235   41166 cri.go:89] found id: ""
	I1009 18:30:49.902249   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.902255   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:49.902260   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:49.902330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:49.927833   41166 cri.go:89] found id: ""
	I1009 18:30:49.927848   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.927855   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:49.927859   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:49.927914   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:49.952484   41166 cri.go:89] found id: ""
	I1009 18:30:49.952500   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.952515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:49.952525   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:49.952653   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:49.978974   41166 cri.go:89] found id: ""
	I1009 18:30:49.978989   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.978997   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:49.979003   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:49.979055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:50.003996   41166 cri.go:89] found id: ""
	I1009 18:30:50.004011   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.004020   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:50.004026   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:50.004074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:50.029201   41166 cri.go:89] found id: ""
	I1009 18:30:50.029213   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.029220   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:50.029225   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:50.029285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:50.055190   41166 cri.go:89] found id: ""
	I1009 18:30:50.055203   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.055208   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:50.055215   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:50.055224   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:50.124075   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:50.124092   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:50.135918   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:50.135933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:50.192425   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:50.185538   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.186038   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.187643   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.188060   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.189680   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:50.192437   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:50.192450   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:50.252346   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:50.252364   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:52.781770   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:52.792376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:52.792418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:52.818902   41166 cri.go:89] found id: ""
	I1009 18:30:52.818916   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.818922   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:52.818941   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:52.818984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:52.844120   41166 cri.go:89] found id: ""
	I1009 18:30:52.844145   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.844154   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:52.844160   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:52.844205   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:52.870228   41166 cri.go:89] found id: ""
	I1009 18:30:52.870242   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.870254   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:52.870259   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:52.870305   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:52.896056   41166 cri.go:89] found id: ""
	I1009 18:30:52.896073   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.896082   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:52.896089   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:52.896151   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:52.921111   41166 cri.go:89] found id: ""
	I1009 18:30:52.921126   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.921145   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:52.921152   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:52.921198   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:52.947164   41166 cri.go:89] found id: ""
	I1009 18:30:52.947180   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.947189   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:52.947194   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:52.947246   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:52.972398   41166 cri.go:89] found id: ""
	I1009 18:30:52.972412   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.972419   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:52.972426   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:52.972441   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:53.041501   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:53.041519   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:53.053308   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:53.053324   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:53.109333   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:53.102407   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.102951   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104551   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104933   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.106568   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:53.109342   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:53.109351   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:53.168700   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:53.168718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:55.699434   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:55.709814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:55.709854   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:55.734822   41166 cri.go:89] found id: ""
	I1009 18:30:55.734841   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.734851   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:55.734858   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:55.734916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:55.759667   41166 cri.go:89] found id: ""
	I1009 18:30:55.759684   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.759692   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:55.759698   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:55.759750   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:55.785789   41166 cri.go:89] found id: ""
	I1009 18:30:55.785805   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.785813   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:55.785819   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:55.785872   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:55.810465   41166 cri.go:89] found id: ""
	I1009 18:30:55.810481   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.810490   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:55.810496   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:55.810537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:55.836067   41166 cri.go:89] found id: ""
	I1009 18:30:55.836080   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.836086   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:55.836091   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:55.836131   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:55.860951   41166 cri.go:89] found id: ""
	I1009 18:30:55.860967   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.860974   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:55.860978   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:55.861021   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:55.885761   41166 cri.go:89] found id: ""
	I1009 18:30:55.885775   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.885781   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:55.885788   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:55.885797   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:55.915265   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:55.915280   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:55.981115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:55.981146   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:55.993311   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:55.993328   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:56.050751   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:56.043889   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.044374   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.045969   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.046413   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.047907   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:56.050764   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:56.050774   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:58.612432   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:58.623245   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:58.623295   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:58.648116   41166 cri.go:89] found id: ""
	I1009 18:30:58.648129   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.648149   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:58.648156   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:58.648209   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:58.674600   41166 cri.go:89] found id: ""
	I1009 18:30:58.674619   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.674627   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:58.674634   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:58.674700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:58.700636   41166 cri.go:89] found id: ""
	I1009 18:30:58.700649   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.700655   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:58.700659   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:58.700701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:58.725891   41166 cri.go:89] found id: ""
	I1009 18:30:58.725907   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.725916   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:58.725922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:58.725984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:58.751493   41166 cri.go:89] found id: ""
	I1009 18:30:58.751509   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.751517   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:58.751523   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:58.751565   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:58.776578   41166 cri.go:89] found id: ""
	I1009 18:30:58.776594   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.776603   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:58.776609   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:58.776668   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:58.802746   41166 cri.go:89] found id: ""
	I1009 18:30:58.802759   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.802765   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:58.802772   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:58.802780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:58.871392   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:58.871409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:58.883200   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:58.883216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:58.939993   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:58.932935   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.933540   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935122   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935618   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.937106   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:58.940010   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:58.940026   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:59.001043   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:59.001062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:01.533754   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:01.544314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:01.544360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:01.570557   41166 cri.go:89] found id: ""
	I1009 18:31:01.570573   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.570581   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:01.570587   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:01.570633   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:01.597498   41166 cri.go:89] found id: ""
	I1009 18:31:01.597512   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.597518   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:01.597522   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:01.597562   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:01.624834   41166 cri.go:89] found id: ""
	I1009 18:31:01.624850   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.624859   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:01.624865   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:01.624928   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:01.650834   41166 cri.go:89] found id: ""
	I1009 18:31:01.650849   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.650858   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:01.650864   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:01.650902   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:01.676498   41166 cri.go:89] found id: ""
	I1009 18:31:01.676513   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.676522   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:01.676530   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:01.676575   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:01.702274   41166 cri.go:89] found id: ""
	I1009 18:31:01.702288   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.702299   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:01.702304   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:01.702359   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:01.727077   41166 cri.go:89] found id: ""
	I1009 18:31:01.727089   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.727095   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:01.727102   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:01.727110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:01.794867   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:01.794884   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:01.807132   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:01.807156   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:01.863186   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:01.856581   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.857195   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.858743   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.859211   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.860783   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:01.856581   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.857195   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.858743   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.859211   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.860783   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:01.863194   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:01.863203   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:01.926319   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:01.926337   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
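The cycle above repeats for the remainder of this section: minikube polls for a kube-apiserver process, lists CRI containers for each control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly every 2.5 seconds. A minimal Go sketch of that probe loop, assuming only the crictl invocations shown in the log; the loop structure and helper names here are illustrative, not minikube's actual code:

```go
// Illustrative sketch of the probe cycle visible in this log: poll each
// control-plane component via `crictl ps -a --quiet --name=<name>` and
// retry on a ~2.5 s cadence while nothing is found. Hypothetical code,
// reconstructed from the commands in the log, not minikube source.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listContainers runs crictl and returns the container IDs it prints,
// one per line; an empty slice means no matching container exists yet.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for {
		found := false
		for _, c := range components {
			if ids, err := listContainers(c); err == nil && len(ids) > 0 {
				fmt.Printf("found %s: %v\n", c, ids)
				found = true
			}
		}
		if found {
			return
		}
		// No control-plane containers yet: wait and poll again, mirroring
		// the ~2.5 s gap between pgrep attempts in the log above.
		time.Sleep(2500 * time.Millisecond)
	}
}
```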
	I1009 18:31:04.456429   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:04.467647   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:04.467697   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:04.494363   41166 cri.go:89] found id: ""
	I1009 18:31:04.494376   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.494382   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:04.494386   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:04.494425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:04.519597   41166 cri.go:89] found id: ""
	I1009 18:31:04.519613   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.519622   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:04.519627   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:04.519673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:04.544960   41166 cri.go:89] found id: ""
	I1009 18:31:04.544973   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.544979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:04.544983   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:04.545025   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:04.570312   41166 cri.go:89] found id: ""
	I1009 18:31:04.570326   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.570331   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:04.570336   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:04.570376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:04.598075   41166 cri.go:89] found id: ""
	I1009 18:31:04.598088   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.598094   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:04.598098   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:04.598163   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:04.624439   41166 cri.go:89] found id: ""
	I1009 18:31:04.624452   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.624458   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:04.624462   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:04.624501   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:04.650512   41166 cri.go:89] found id: ""
	I1009 18:31:04.650526   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.650535   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:04.650542   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:04.650550   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:04.721753   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:04.721770   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:04.733512   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:04.733526   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:04.789859   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:04.782731   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.783273   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.784877   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.785331   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.786824   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:04.782731   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.783273   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.784877   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.785331   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.786824   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:04.789871   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:04.789881   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:04.853995   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:04.854014   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:07.383979   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:07.395090   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:07.395190   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:07.421890   41166 cri.go:89] found id: ""
	I1009 18:31:07.421903   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.421909   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:07.421914   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:07.421966   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:07.448060   41166 cri.go:89] found id: ""
	I1009 18:31:07.448073   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.448079   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:07.448083   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:07.448124   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:07.474470   41166 cri.go:89] found id: ""
	I1009 18:31:07.474482   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.474488   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:07.474493   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:07.474536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:07.501777   41166 cri.go:89] found id: ""
	I1009 18:31:07.501793   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.501802   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:07.501808   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:07.501851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:07.527522   41166 cri.go:89] found id: ""
	I1009 18:31:07.527534   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.527540   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:07.527545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:07.527597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:07.552279   41166 cri.go:89] found id: ""
	I1009 18:31:07.552294   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.552302   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:07.552307   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:07.552346   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:07.576431   41166 cri.go:89] found id: ""
	I1009 18:31:07.576446   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.576454   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:07.576462   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:07.576470   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:07.643680   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:07.643696   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:07.655497   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:07.655511   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:07.710565   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:07.703625   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.704548   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706134   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706591   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.708100   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:07.703625   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.704548   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706134   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706591   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.708100   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:07.710581   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:07.710591   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:07.772201   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:07.772218   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.301414   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:10.312068   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:10.312119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:10.336646   41166 cri.go:89] found id: ""
	I1009 18:31:10.336661   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.336668   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:10.336672   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:10.336714   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:10.361765   41166 cri.go:89] found id: ""
	I1009 18:31:10.361779   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.361788   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:10.361793   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:10.361849   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:10.386638   41166 cri.go:89] found id: ""
	I1009 18:31:10.386654   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.386663   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:10.386669   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:10.386715   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:10.412340   41166 cri.go:89] found id: ""
	I1009 18:31:10.412353   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.412359   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:10.412363   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:10.412402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:10.437345   41166 cri.go:89] found id: ""
	I1009 18:31:10.437360   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.437368   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:10.437372   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:10.437412   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:10.461775   41166 cri.go:89] found id: ""
	I1009 18:31:10.461790   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.461797   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:10.461804   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:10.461851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:10.486502   41166 cri.go:89] found id: ""
	I1009 18:31:10.486515   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.486521   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:10.486528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:10.486540   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:10.541525   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:10.534617   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.535191   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.536754   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.537206   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.538626   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:10.534617   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.535191   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.536754   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.537206   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.538626   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:10.541534   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:10.541543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:10.605554   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:10.605573   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.633218   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:10.633233   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:10.698623   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:10.698640   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.212017   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:13.222887   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:13.222934   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:13.249527   41166 cri.go:89] found id: ""
	I1009 18:31:13.249545   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.249553   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:13.249558   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:13.249613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:13.276030   41166 cri.go:89] found id: ""
	I1009 18:31:13.276047   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.276055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:13.276062   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:13.276123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:13.301696   41166 cri.go:89] found id: ""
	I1009 18:31:13.301712   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.301722   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:13.301728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:13.301779   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:13.327279   41166 cri.go:89] found id: ""
	I1009 18:31:13.327297   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.327305   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:13.327314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:13.327376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:13.352370   41166 cri.go:89] found id: ""
	I1009 18:31:13.352387   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.352396   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:13.352404   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:13.352455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:13.376705   41166 cri.go:89] found id: ""
	I1009 18:31:13.376718   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.376724   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:13.376728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:13.376769   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:13.401874   41166 cri.go:89] found id: ""
	I1009 18:31:13.401887   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.401893   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:13.401899   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:13.401908   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:13.468065   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:13.468083   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.479819   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:13.479833   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:13.536357   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:13.528543   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.529016   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.530652   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532160   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532602   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:13.528543   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.529016   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.530652   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532160   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532602   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:13.536371   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:13.536385   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:13.595534   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:13.595552   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:16.124813   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:16.135558   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:16.135630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:16.161632   41166 cri.go:89] found id: ""
	I1009 18:31:16.161649   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.161657   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:16.161662   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:16.161706   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:16.187466   41166 cri.go:89] found id: ""
	I1009 18:31:16.187480   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.187486   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:16.187491   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:16.187532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:16.214699   41166 cri.go:89] found id: ""
	I1009 18:31:16.214712   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.214718   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:16.214722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:16.214772   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:16.241600   41166 cri.go:89] found id: ""
	I1009 18:31:16.241617   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.241622   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:16.241627   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:16.241670   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:16.266065   41166 cri.go:89] found id: ""
	I1009 18:31:16.266082   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.266091   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:16.266097   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:16.266158   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:16.291053   41166 cri.go:89] found id: ""
	I1009 18:31:16.291067   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.291073   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:16.291077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:16.291123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:16.316037   41166 cri.go:89] found id: ""
	I1009 18:31:16.316053   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.316058   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:16.316065   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:16.316075   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:16.374518   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:16.374537   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:16.403805   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:16.403890   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:16.472344   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:16.472362   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:16.483905   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:16.483921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:16.539056   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:16.532081   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.532735   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534334   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534743   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.536309   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:16.532081   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.532735   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534334   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534743   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.536309   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:19.039513   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:19.050212   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:19.050255   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:19.074802   41166 cri.go:89] found id: ""
	I1009 18:31:19.074819   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.074828   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:19.074834   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:19.074879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:19.101554   41166 cri.go:89] found id: ""
	I1009 18:31:19.101568   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.101574   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:19.101579   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:19.101618   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:19.126592   41166 cri.go:89] found id: ""
	I1009 18:31:19.126604   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.126610   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:19.126614   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:19.126652   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:19.151096   41166 cri.go:89] found id: ""
	I1009 18:31:19.151108   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.151117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:19.151124   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:19.151179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:19.175712   41166 cri.go:89] found id: ""
	I1009 18:31:19.175730   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.175736   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:19.175740   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:19.175781   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:19.200064   41166 cri.go:89] found id: ""
	I1009 18:31:19.200080   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.200088   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:19.200094   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:19.200161   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:19.227391   41166 cri.go:89] found id: ""
	I1009 18:31:19.227406   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.227414   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:19.227424   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:19.227434   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:19.289413   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:19.289430   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:19.318081   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:19.318095   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:19.387739   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:19.387754   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:19.399028   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:19.399046   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:19.454538   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:19.447438   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.447971   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449548   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449995   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.451532   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:19.447438   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.447971   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449548   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449995   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.451532   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
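Every describe-nodes attempt in this section fails the same way: with no kube-apiserver container running, nothing listens on the apiserver port, so kubectl's requests to https://localhost:8441 are refused at the TCP layer before any HTTP exchange happens. A minimal sketch of that failure mode, assuming only the port shown in the log; the probe itself is illustrative, not part of minikube:

```go
// Reproduces the failure mode behind the kubectl stderr above: a TCP
// dial to localhost:8441 is refused when no kube-apiserver is running.
// The port comes from the log; this probe is hypothetical helper code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Matches the "connect: connection refused" lines in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is open")
}
```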
	I1009 18:31:21.956227   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:21.966936   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:21.966995   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:21.991378   41166 cri.go:89] found id: ""
	I1009 18:31:21.991391   41166 logs.go:282] 0 containers: []
	W1009 18:31:21.991397   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:21.991402   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:21.991440   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:22.016783   41166 cri.go:89] found id: ""
	I1009 18:31:22.016796   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.016803   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:22.016808   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:22.016848   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:22.041987   41166 cri.go:89] found id: ""
	I1009 18:31:22.042003   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.042012   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:22.042018   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:22.042068   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:22.067709   41166 cri.go:89] found id: ""
	I1009 18:31:22.067722   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.067727   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:22.067735   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:22.067787   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:22.093654   41166 cri.go:89] found id: ""
	I1009 18:31:22.093666   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.093671   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:22.093675   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:22.093718   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:22.119263   41166 cri.go:89] found id: ""
	I1009 18:31:22.119276   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.119306   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:22.119310   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:22.119350   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:22.143920   41166 cri.go:89] found id: ""
	I1009 18:31:22.143933   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.143939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:22.143945   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:22.143954   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:22.172713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:22.172727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:22.241689   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:22.241717   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:22.253927   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:22.253942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:22.308454   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:22.301618   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.302105   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.303689   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.304160   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.305712   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:22.301618   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.302105   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.303689   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.304160   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.305712   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:22.308469   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:22.308483   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:24.874240   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:24.885199   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:24.885251   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:24.912332   41166 cri.go:89] found id: ""
	I1009 18:31:24.912355   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.912363   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:24.912369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:24.912510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:24.938534   41166 cri.go:89] found id: ""
	I1009 18:31:24.938551   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.938557   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:24.938564   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:24.938611   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:24.965113   41166 cri.go:89] found id: ""
	I1009 18:31:24.965125   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.965131   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:24.965151   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:24.965204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:24.991845   41166 cri.go:89] found id: ""
	I1009 18:31:24.991858   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.991864   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:24.991868   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:24.991910   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:25.018693   41166 cri.go:89] found id: ""
	I1009 18:31:25.018706   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.018711   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:25.018717   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:25.018756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:25.044931   41166 cri.go:89] found id: ""
	I1009 18:31:25.044948   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.044957   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:25.044963   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:25.045014   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:25.071449   41166 cri.go:89] found id: ""
	I1009 18:31:25.071465   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.071474   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:25.071483   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:25.071495   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:25.138301   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:25.138320   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:25.150561   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:25.150575   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:25.208095   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:25.201000   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.201519   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203190   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203673   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.205213   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:25.201000   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.201519   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203190   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203673   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.205213   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:25.208105   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:25.208114   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:25.272810   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:25.272829   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:27.804229   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:27.815074   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:27.815120   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:27.840171   41166 cri.go:89] found id: ""
	I1009 18:31:27.840188   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.840196   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:27.840200   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:27.840274   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:27.866963   41166 cri.go:89] found id: ""
	I1009 18:31:27.866981   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.866990   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:27.866996   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:27.867076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:27.893152   41166 cri.go:89] found id: ""
	I1009 18:31:27.893169   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.893177   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:27.893183   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:27.893235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:27.920337   41166 cri.go:89] found id: ""
	I1009 18:31:27.920350   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.920356   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:27.920361   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:27.920403   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:27.945940   41166 cri.go:89] found id: ""
	I1009 18:31:27.945956   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.945964   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:27.945971   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:27.946036   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:27.971578   41166 cri.go:89] found id: ""
	I1009 18:31:27.971594   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.971600   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:27.971604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:27.971651   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:27.998876   41166 cri.go:89] found id: ""
	I1009 18:31:27.998890   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.998898   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:27.998907   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:27.998919   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:28.060031   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:28.060050   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:28.090280   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:28.090294   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:28.155986   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:28.156004   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:28.167898   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:28.167912   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:28.224480   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:28.217373   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.217904   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219580   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219973   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.221548   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
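From here the same probe repeats on a roughly 3-second cadence: pgrep for a live kube-apiserver process, then a crictl sweep over each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), all of which come back empty. A comparable wait loop, as a minimal sketch (the interval is read off the log timestamps; an overall timeout is assumed to be enforced by the caller):

    # poll CRI until a kube-apiserver container appears
    until sudo crictl ps --all --quiet --name=kube-apiserver | grep -q .; do
        sleep 3
    done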
	I1009 18:31:30.726158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:30.736658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:30.736713   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:30.762096   41166 cri.go:89] found id: ""
	I1009 18:31:30.762111   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.762119   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:30.762125   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:30.762193   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:30.787132   41166 cri.go:89] found id: ""
	I1009 18:31:30.787161   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.787169   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:30.787175   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:30.787234   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:30.813496   41166 cri.go:89] found id: ""
	I1009 18:31:30.813510   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.813515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:30.813519   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:30.813558   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:30.838073   41166 cri.go:89] found id: ""
	I1009 18:31:30.838089   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.838098   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:30.838104   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:30.838167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:30.864286   41166 cri.go:89] found id: ""
	I1009 18:31:30.864301   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.864307   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:30.864312   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:30.864353   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:30.890806   41166 cri.go:89] found id: ""
	I1009 18:31:30.890819   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.890825   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:30.890830   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:30.890885   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:30.917461   41166 cri.go:89] found id: ""
	I1009 18:31:30.917474   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.917480   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:30.917487   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:30.917496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:30.947122   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:30.947157   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:31.013114   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:31.013130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:31.025904   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:31.025924   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:31.081194   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:31.074116   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.074697   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076284   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076747   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.078298   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:31.081206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:31.081217   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
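Every kubectl attempt above dies at the TCP layer ("dial tcp [::1]:8441: connect: connection refused"), so the failure sits below auth and API discovery: nothing is bound to the apiserver port at all. A hypothetical manual check from inside the node (port 8441 is taken from the log; /healthz is a standard apiserver health endpoint):

    # connection refused until the apiserver binds; an HTTP body afterwards
    curl -sk https://localhost:8441/healthz || echo "apiserver not listening on 8441"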
	I1009 18:31:33.641553   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:33.652051   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:33.652105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:33.676453   41166 cri.go:89] found id: ""
	I1009 18:31:33.676467   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.676473   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:33.676477   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:33.676517   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:33.701838   41166 cri.go:89] found id: ""
	I1009 18:31:33.701854   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.701862   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:33.701868   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:33.701916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:33.727771   41166 cri.go:89] found id: ""
	I1009 18:31:33.727787   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.727794   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:33.727799   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:33.727839   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:33.753654   41166 cri.go:89] found id: ""
	I1009 18:31:33.753670   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.753681   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:33.753686   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:33.753731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:33.780405   41166 cri.go:89] found id: ""
	I1009 18:31:33.780421   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.780430   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:33.780436   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:33.780477   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:33.807435   41166 cri.go:89] found id: ""
	I1009 18:31:33.807448   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.807454   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:33.807458   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:33.807502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:33.833608   41166 cri.go:89] found id: ""
	I1009 18:31:33.833625   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.833633   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:33.833642   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:33.833655   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:33.900086   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:33.900106   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:33.912409   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:33.912429   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:33.968532   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:33.961720   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.962278   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.963911   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.964427   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.965875   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:33.968541   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:33.968551   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:34.031879   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:34.031899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
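The dmesg pass narrows the kernel ring buffer to actionable entries. The short flags in the log expand to long options as follows (same invocation, spelled out; no behavior change):

    # -P/--nopager, -H/--human, -L=never/--color=never, plus a priority filter
    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400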
	I1009 18:31:36.563728   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:36.574356   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:36.574399   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:36.600194   41166 cri.go:89] found id: ""
	I1009 18:31:36.600209   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.600217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:36.600223   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:36.600284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:36.626075   41166 cri.go:89] found id: ""
	I1009 18:31:36.626096   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.626106   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:36.626111   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:36.626182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:36.652078   41166 cri.go:89] found id: ""
	I1009 18:31:36.652098   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.652104   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:36.652109   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:36.652170   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:36.677462   41166 cri.go:89] found id: ""
	I1009 18:31:36.677474   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.677480   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:36.677484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:36.677522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:36.703778   41166 cri.go:89] found id: ""
	I1009 18:31:36.703793   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.703801   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:36.703807   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:36.703856   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:36.729868   41166 cri.go:89] found id: ""
	I1009 18:31:36.729884   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.729893   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:36.729899   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:36.729942   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:36.756775   41166 cri.go:89] found id: ""
	I1009 18:31:36.756787   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.756793   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:36.756801   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:36.756810   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:36.826838   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:36.826854   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:36.838705   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:36.838718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:36.894816   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:36.887889   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.888440   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890010   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890538   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.891994   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:36.894826   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:36.894838   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:36.959865   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:36.959882   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
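The kubelet and CRI-O gathers both lean on journalctl with -u (systemd unit) and -n (line count). The long-option equivalent, with --no-pager added here for non-interactive capture (an addition, not part of the original command):

    # last 400 kubelet journal entries, stream-friendly
    sudo journalctl --unit=kubelet --lines=400 --no-pager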
	I1009 18:31:39.490368   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:39.501284   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:39.501335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:39.527003   41166 cri.go:89] found id: ""
	I1009 18:31:39.527016   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.527022   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:39.527026   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:39.527071   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:39.553355   41166 cri.go:89] found id: ""
	I1009 18:31:39.553370   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.553379   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:39.553384   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:39.553425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:39.579105   41166 cri.go:89] found id: ""
	I1009 18:31:39.579121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.579128   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:39.579133   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:39.579203   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:39.604899   41166 cri.go:89] found id: ""
	I1009 18:31:39.604913   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.604919   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:39.604928   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:39.604985   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:39.630635   41166 cri.go:89] found id: ""
	I1009 18:31:39.630647   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.630653   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:39.630657   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:39.630701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:39.656106   41166 cri.go:89] found id: ""
	I1009 18:31:39.656121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.656129   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:39.656148   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:39.656207   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:39.681655   41166 cri.go:89] found id: ""
	I1009 18:31:39.681667   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.681673   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:39.681680   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:39.681688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:39.744126   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:39.744152   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:39.772799   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:39.772812   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:39.844571   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:39.844590   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:39.856246   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:39.856263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:39.911854   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:39.905117   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.905586   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907188   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907677   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.909231   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:42.413528   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:42.424343   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:42.424407   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:42.450128   41166 cri.go:89] found id: ""
	I1009 18:31:42.450165   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.450173   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:42.450180   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:42.450239   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:42.475946   41166 cri.go:89] found id: ""
	I1009 18:31:42.475961   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.475970   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:42.475976   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:42.476031   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:42.502865   41166 cri.go:89] found id: ""
	I1009 18:31:42.502881   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.502890   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:42.502896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:42.502946   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:42.530798   41166 cri.go:89] found id: ""
	I1009 18:31:42.530814   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.530823   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:42.530829   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:42.530879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:42.556524   41166 cri.go:89] found id: ""
	I1009 18:31:42.556539   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.556548   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:42.556554   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:42.556605   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:42.582936   41166 cri.go:89] found id: ""
	I1009 18:31:42.582953   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.582961   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:42.582967   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:42.583055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:42.609400   41166 cri.go:89] found id: ""
	I1009 18:31:42.609415   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.609424   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:42.609433   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:42.609444   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:42.671451   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:42.671468   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:42.700813   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:42.700832   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:42.769841   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:42.769859   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:42.782244   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:42.782261   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:42.840011   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:42.832755   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.833376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.834917   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.835376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.836976   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
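Note that every crictl call itself exits cleanly and returns an empty ID list (found id: "", 0 containers: []), so the CRI socket is reachable; it is specifically the control-plane containers that never started. Separating those two failure modes by hand might look like this (a sketch; it assumes crictl's runtime endpoint is configured as minikube leaves it):

    # is the CRI socket reachable at all?
    sudo crictl info >/dev/null && echo "CRI socket OK"
    # any containers whatsoever, not just control-plane ones?
    sudo crictl ps --all --quiet | wc -l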
	I1009 18:31:45.340705   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:45.350991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:45.351034   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:45.375913   41166 cri.go:89] found id: ""
	I1009 18:31:45.375926   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.375932   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:45.375936   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:45.375974   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:45.402366   41166 cri.go:89] found id: ""
	I1009 18:31:45.402380   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.402386   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:45.402391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:45.402432   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:45.428247   41166 cri.go:89] found id: ""
	I1009 18:31:45.428263   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.428272   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:45.428278   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:45.428332   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:45.454072   41166 cri.go:89] found id: ""
	I1009 18:31:45.454087   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.454094   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:45.454103   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:45.454173   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:45.479985   41166 cri.go:89] found id: ""
	I1009 18:31:45.480000   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.480006   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:45.480012   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:45.480064   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:45.505956   41166 cri.go:89] found id: ""
	I1009 18:31:45.505972   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.505980   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:45.505986   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:45.506041   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:45.530757   41166 cri.go:89] found id: ""
	I1009 18:31:45.530770   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.530775   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:45.530782   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:45.530791   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:45.597676   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:45.597693   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:45.609290   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:45.609305   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:45.666583   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:45.659856   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.660431   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.661987   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.662451   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.663976   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:45.666593   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:45.666604   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:45.730000   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:45.730018   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
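Each cycle opens with pgrep -xnf kube-apiserver.*minikube.*, which asks procps for the newest (-n) process whose full command line (-f) matches the pattern exactly (-x). The long-form equivalent:

    # newest PID whose full cmdline matches the pattern
    sudo pgrep --newest --full --exact 'kube-apiserver.*minikube.*'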
	I1009 18:31:48.259768   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:48.270482   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:48.270528   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:48.297438   41166 cri.go:89] found id: ""
	I1009 18:31:48.297454   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.297462   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:48.297467   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:48.297510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:48.323680   41166 cri.go:89] found id: ""
	I1009 18:31:48.323695   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.323704   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:48.323710   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:48.323756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:48.348422   41166 cri.go:89] found id: ""
	I1009 18:31:48.348437   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.348445   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:48.348450   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:48.348507   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:48.373232   41166 cri.go:89] found id: ""
	I1009 18:31:48.373247   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.373253   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:48.373263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:48.373306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:48.398755   41166 cri.go:89] found id: ""
	I1009 18:31:48.398770   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.398776   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:48.398781   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:48.398822   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:48.423977   41166 cri.go:89] found id: ""
	I1009 18:31:48.423993   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.423999   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:48.424004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:48.424056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:48.450473   41166 cri.go:89] found id: ""
	I1009 18:31:48.450486   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.450492   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:48.450499   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:48.450510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:48.461974   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:48.461997   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:48.519875   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:48.513250   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.513778   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515240   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515817   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.517350   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:48.519884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:48.519893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:48.579801   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:48.579819   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:48.609008   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:48.609031   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
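Each failed describe-nodes attempt logs five memcache.go:265 errors a few milliseconds apart, which matches client-go retrying API group discovery before kubectl gives up. To watch the raw round-trips on a run like this, one could raise the client verbosity (a hypothetical variant of the exact command in the log; -v=6 is a standard kubectl flag):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig -v=6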
	I1009 18:31:51.179735   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:51.190623   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:51.190689   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:51.215839   41166 cri.go:89] found id: ""
	I1009 18:31:51.215854   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.215860   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:51.215866   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:51.215919   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:51.241754   41166 cri.go:89] found id: ""
	I1009 18:31:51.241771   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.241781   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:51.241786   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:51.241834   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:51.269204   41166 cri.go:89] found id: ""
	I1009 18:31:51.269221   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.269227   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:51.269233   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:51.269288   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:51.296498   41166 cri.go:89] found id: ""
	I1009 18:31:51.296514   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.296522   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:51.296527   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:51.296573   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:51.323034   41166 cri.go:89] found id: ""
	I1009 18:31:51.323049   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.323057   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:51.323063   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:51.323112   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:51.348104   41166 cri.go:89] found id: ""
	I1009 18:31:51.348119   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.348125   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:51.348131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:51.348199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:51.374228   41166 cri.go:89] found id: ""
	I1009 18:31:51.374242   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.374248   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:51.374255   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:51.374265   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:51.403810   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:51.403825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:51.474611   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:51.474630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:51.486750   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:51.486766   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:51.542637   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:51.535796   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.536370   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.537923   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.538394   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.539906   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:51.542656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:51.542666   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
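The repeated "dial tcp [::1]:8441: connect: connection refused" from kubectl narrows the failure: nothing is accepting connections on the apiserver port this profile uses. Two quick checks one could run on the node to confirm (a sketch; ss and curl are assumptions, not commands minikube itself runs here):

    # Is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    # Does the apiserver answer a health probe?
    curl -sk https://localhost:8441/healthz || echo "no apiserver response"

With no kube-apiserver container found in any cycle, both checks would come back empty here.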
	I1009 18:31:54.103184   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:54.114409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:54.114455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:54.140634   41166 cri.go:89] found id: ""
	I1009 18:31:54.140646   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.140652   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:54.140656   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:54.140703   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:54.166896   41166 cri.go:89] found id: ""
	I1009 18:31:54.166911   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.166918   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:54.166922   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:54.166962   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:54.193155   41166 cri.go:89] found id: ""
	I1009 18:31:54.193170   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.193176   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:54.193181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:54.193222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:54.217754   41166 cri.go:89] found id: ""
	I1009 18:31:54.217767   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.217772   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:54.217777   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:54.217819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:54.243823   41166 cri.go:89] found id: ""
	I1009 18:31:54.243837   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.243843   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:54.243848   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:54.243887   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:54.271827   41166 cri.go:89] found id: ""
	I1009 18:31:54.271841   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.271847   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:54.271852   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:54.271895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:54.297907   41166 cri.go:89] found id: ""
	I1009 18:31:54.297920   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.297925   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:54.297932   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:54.297942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:54.365493   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:54.365510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:54.377258   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:54.377275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:54.432221   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:54.425355   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.425907   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427547   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427972   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.429614   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:54.432234   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:54.432244   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:54.492172   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:54.492189   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.022444   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:57.033223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:57.033285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:57.059246   41166 cri.go:89] found id: ""
	I1009 18:31:57.059267   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.059273   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:57.059277   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:57.059348   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:57.084187   41166 cri.go:89] found id: ""
	I1009 18:31:57.084199   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.084205   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:57.084209   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:57.084250   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:57.109765   41166 cri.go:89] found id: ""
	I1009 18:31:57.109778   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.109784   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:57.109788   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:57.109828   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:57.135796   41166 cri.go:89] found id: ""
	I1009 18:31:57.135809   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.135817   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:57.135824   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:57.136027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:57.162702   41166 cri.go:89] found id: ""
	I1009 18:31:57.162715   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.162720   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:57.162724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:57.162773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:57.189575   41166 cri.go:89] found id: ""
	I1009 18:31:57.189588   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.189594   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:57.189598   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:57.189639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:57.214916   41166 cri.go:89] found id: ""
	I1009 18:31:57.214931   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.214939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:57.214946   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:57.214956   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:57.226333   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:57.226347   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:57.282176   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:57.275375   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.275847   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277403   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277780   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.279430   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:57.282186   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:57.282196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:57.341981   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:57.341999   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.372028   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:57.372043   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:59.940902   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:59.951810   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:59.951853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:59.977888   41166 cri.go:89] found id: ""
	I1009 18:31:59.977902   41166 logs.go:282] 0 containers: []
	W1009 18:31:59.977908   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:59.977912   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:59.977977   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:00.004236   41166 cri.go:89] found id: ""
	I1009 18:32:00.004252   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.004265   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:00.004293   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:00.004347   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:00.030808   41166 cri.go:89] found id: ""
	I1009 18:32:00.030826   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.030836   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:00.030842   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:00.030895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:00.056760   41166 cri.go:89] found id: ""
	I1009 18:32:00.056772   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.056778   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:00.056782   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:00.056826   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:00.083048   41166 cri.go:89] found id: ""
	I1009 18:32:00.083062   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.083068   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:00.083072   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:00.083116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:00.109679   41166 cri.go:89] found id: ""
	I1009 18:32:00.109693   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.109699   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:00.109704   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:00.109753   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:00.135808   41166 cri.go:89] found id: ""
	I1009 18:32:00.135820   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.135826   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:00.135833   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:00.135841   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:00.192719   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:00.185431   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.185945   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.187601   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.188147   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.189704   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:00.192732   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:00.192744   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:00.253264   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:00.253287   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:00.283450   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:00.283463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:00.350291   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:00.350309   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
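Each cycle also re-collects the same evidence bundle. The exact commands, gathered into one runnable snippet (flags copied verbatim from the log):

    sudo journalctl -u kubelet -n 400        # kubelet service log
    sudo journalctl -u crio -n 400           # CRI-O service log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # container status, docker fallback

Only the ordering of the gather steps varies between cycles; the picture they paint does not change.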
	I1009 18:32:02.863750   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:02.874396   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:02.874434   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:02.900500   41166 cri.go:89] found id: ""
	I1009 18:32:02.900513   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.900519   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:02.900523   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:02.900563   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:02.926067   41166 cri.go:89] found id: ""
	I1009 18:32:02.926083   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.926092   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:02.926099   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:02.926157   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:02.951112   41166 cri.go:89] found id: ""
	I1009 18:32:02.951127   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.951147   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:02.951154   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:02.951202   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:02.976038   41166 cri.go:89] found id: ""
	I1009 18:32:02.976052   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.976057   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:02.976062   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:02.976114   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:03.001712   41166 cri.go:89] found id: ""
	I1009 18:32:03.001724   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.001730   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:03.001734   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:03.001773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:03.028181   41166 cri.go:89] found id: ""
	I1009 18:32:03.028195   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.028201   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:03.028205   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:03.028247   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:03.054529   41166 cri.go:89] found id: ""
	I1009 18:32:03.054541   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.054547   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:03.054554   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:03.054565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:03.122196   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:03.122214   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:03.133617   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:03.133633   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:03.189282   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:03.182610   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.183115   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.184674   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.185052   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.186556   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:03.189291   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:03.189301   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:03.252856   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:03.252874   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:05.784812   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:05.795352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:05.795402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:05.820276   41166 cri.go:89] found id: ""
	I1009 18:32:05.820289   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.820295   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:05.820300   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:05.820341   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:05.846395   41166 cri.go:89] found id: ""
	I1009 18:32:05.846408   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.846414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:05.846418   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:05.846469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:05.872185   41166 cri.go:89] found id: ""
	I1009 18:32:05.872199   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.872205   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:05.872209   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:05.872254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:05.898231   41166 cri.go:89] found id: ""
	I1009 18:32:05.898251   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.898257   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:05.898263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:05.898303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:05.923683   41166 cri.go:89] found id: ""
	I1009 18:32:05.923699   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.923707   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:05.923712   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:05.923755   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:05.949168   41166 cri.go:89] found id: ""
	I1009 18:32:05.949183   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.949188   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:05.949193   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:05.949236   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:05.975320   41166 cri.go:89] found id: ""
	I1009 18:32:05.975332   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.975338   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:05.975344   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:05.975354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:06.041809   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:06.041827   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:06.054016   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:06.054040   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:06.110078   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:06.103223   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.103767   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105448   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105875   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.107466   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:06.110088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:06.110097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:06.172545   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:06.172564   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:08.701488   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:08.712540   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:08.712594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:08.738583   41166 cri.go:89] found id: ""
	I1009 18:32:08.738601   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.738608   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:08.738613   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:08.738654   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:08.764379   41166 cri.go:89] found id: ""
	I1009 18:32:08.764396   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.764404   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:08.764412   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:08.764466   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:08.790325   41166 cri.go:89] found id: ""
	I1009 18:32:08.790351   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.790360   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:08.790367   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:08.790417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:08.816765   41166 cri.go:89] found id: ""
	I1009 18:32:08.816780   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.816788   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:08.816792   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:08.816844   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:08.842038   41166 cri.go:89] found id: ""
	I1009 18:32:08.842050   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.842055   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:08.842060   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:08.842119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:08.868221   41166 cri.go:89] found id: ""
	I1009 18:32:08.868236   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.868243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:08.868248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:08.868291   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:08.894780   41166 cri.go:89] found id: ""
	I1009 18:32:08.894797   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.894804   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:08.894810   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:08.894820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:08.952094   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:08.944952   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.945523   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947209   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947687   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.949320   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:08.952107   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:08.952121   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:09.012751   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:09.012769   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:09.042946   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:09.042958   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:09.111059   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:09.111076   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
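The cadence visible in the timestamps is a fixed poll: each new pgrep starts roughly 2.5 seconds after the previous gather finishes. A bounded wait in the same spirit, in bash (a sketch; the pattern string is verbatim from the log, but the 6-minute budget is an assumed value, not minikube's actual timeout):

    deadline=$((SECONDS + 360))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never came up"; break; }
      sleep 3
    done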
	I1009 18:32:11.624407   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:11.635246   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:11.635303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:11.661128   41166 cri.go:89] found id: ""
	I1009 18:32:11.661159   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.661167   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:11.661173   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:11.661225   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:11.685846   41166 cri.go:89] found id: ""
	I1009 18:32:11.685860   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.685866   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:11.685870   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:11.685909   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:11.711700   41166 cri.go:89] found id: ""
	I1009 18:32:11.711714   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.711719   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:11.711723   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:11.711770   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:11.737208   41166 cri.go:89] found id: ""
	I1009 18:32:11.737220   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.737225   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:11.737230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:11.737278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:11.762359   41166 cri.go:89] found id: ""
	I1009 18:32:11.762370   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.762376   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:11.762380   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:11.762430   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:11.787996   41166 cri.go:89] found id: ""
	I1009 18:32:11.788011   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.788019   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:11.788024   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:11.788084   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:11.812657   41166 cri.go:89] found id: ""
	I1009 18:32:11.812671   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.812677   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:11.812685   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:11.812694   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:11.879681   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:11.879697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.891109   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:11.891124   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:11.947646   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:11.940720   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.941253   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.942799   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.943257   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.944825   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:11.947659   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:11.947672   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:12.013733   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:12.013750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.545559   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:14.556586   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:14.556634   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:14.584233   41166 cri.go:89] found id: ""
	I1009 18:32:14.584250   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.584258   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:14.584263   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:14.584312   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:14.610477   41166 cri.go:89] found id: ""
	I1009 18:32:14.610493   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.610500   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:14.610505   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:14.610560   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:14.635807   41166 cri.go:89] found id: ""
	I1009 18:32:14.635824   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.635832   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:14.635837   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:14.635880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:14.661016   41166 cri.go:89] found id: ""
	I1009 18:32:14.661034   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.661043   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:14.661049   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:14.661098   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:14.689198   41166 cri.go:89] found id: ""
	I1009 18:32:14.689212   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.689217   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:14.689223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:14.689278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:14.714892   41166 cri.go:89] found id: ""
	I1009 18:32:14.714908   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.714917   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:14.714923   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:14.714971   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:14.740412   41166 cri.go:89] found id: ""
	I1009 18:32:14.740425   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.740433   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:14.740440   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:14.740449   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:14.803421   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:14.803439   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.831580   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:14.831594   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:14.901628   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:14.901653   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:14.914304   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:14.914326   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:14.971146   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:14.964264   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.964764   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966352   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966731   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.968402   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:14.964264   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.964764   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966352   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966731   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.968402   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
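
Every kubectl call fails the same way: nothing is listening on localhost:8441, which is consistent with the empty container listings. The connection refusals are a symptom, not the root cause, since the apiserver container was never created. A quick way to confirm from inside the node (a sketch; the port comes from the log above, and /livez is the apiserver's standard liveness endpoint):

	# probe the apiserver's liveness endpoint directly
	curl -sk https://localhost:8441/livez || echo "apiserver not listening"
	# and check whether any apiserver process exists at all
	sudo pgrep -af kube-apiserver
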
	I1009 18:32:17.472817   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:17.483574   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:17.483619   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:17.510868   41166 cri.go:89] found id: ""
	I1009 18:32:17.510882   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.510891   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:17.510896   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:17.510956   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:17.537306   41166 cri.go:89] found id: ""
	I1009 18:32:17.537319   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.537325   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:17.537329   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:17.537372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:17.564957   41166 cri.go:89] found id: ""
	I1009 18:32:17.564972   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.564978   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:17.564984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:17.565039   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:17.591401   41166 cri.go:89] found id: ""
	I1009 18:32:17.591418   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.591425   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:17.591430   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:17.591476   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:17.617237   41166 cri.go:89] found id: ""
	I1009 18:32:17.617250   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.617256   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:17.617260   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:17.617302   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:17.642328   41166 cri.go:89] found id: ""
	I1009 18:32:17.642342   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.642348   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:17.642352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:17.642400   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:17.668302   41166 cri.go:89] found id: ""
	I1009 18:32:17.668315   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.668321   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:17.668327   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:17.668336   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:17.679448   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:17.679463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:17.736174   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:17.728959   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.729672   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731395   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731844   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.733446   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:17.728959   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.729672   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731395   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731844   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.733446   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:17.736227   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:17.736236   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:17.795423   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:17.795442   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:17.824553   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:17.824567   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
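
With the API server unreachable, these journal dumps are the only usable evidence: CRI-O's journal shows whether pod sandboxes were ever created, and the kubelet's journal shows what it did (or failed to do) with the static-pod manifests. The same units can be tailed live on the node while reproducing; a sketch:

	# follow both units side by side on the minikube node
	sudo journalctl -u crio -u kubelet -f --no-pager
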
	I1009 18:32:20.394282   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:20.405003   41166 kubeadm.go:601] duration metric: took 4m2.649024916s to restartPrimaryControlPlane
	W1009 18:32:20.405078   41166 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 18:32:20.405162   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
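
Having given up on restarting the existing control plane, minikube wipes it with `kubeadm reset` before re-initializing. The equivalent manual invocation, matching the command in the log (binary path and CRI-O socket taken verbatim from the line above):

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
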
	I1009 18:32:20.850567   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:32:20.863734   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:32:20.872360   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:32:20.872401   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:32:20.880727   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:32:20.880752   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:32:20.880802   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:32:20.888758   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:32:20.888797   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:32:20.896370   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:32:20.904128   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:32:20.904188   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:32:20.911725   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.919740   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:32:20.919783   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.927592   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:32:20.935300   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:32:20.935348   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
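
The grep/rm pairs above are minikube's stale-kubeconfig cleanup: for each kubeconfig under /etc/kubernetes it checks whether the file still points at the expected control-plane endpoint and removes it if not (here the files are simply absent, so each grep exits with status 2 and the rm is a no-op). In shell terms the pattern is roughly:

	for f in admin kubelet controller-manager scheduler; do
	  # keep the file only if it references the expected control-plane endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
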
	I1009 18:32:20.942573   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:32:20.998838   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:32:21.055610   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:36:23.829821   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:36:23.829939   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:36:23.832833   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:36:23.832899   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:36:23.833001   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:36:23.833078   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:36:23.833131   41166 kubeadm.go:318] OS: Linux
	I1009 18:36:23.833211   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:36:23.833255   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:36:23.833293   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:36:23.833332   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:36:23.833371   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:36:23.833408   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:36:23.833452   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:36:23.833487   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:36:23.833563   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:36:23.833644   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:36:23.833715   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:36:23.833763   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:36:23.836738   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:36:23.836809   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:36:23.836876   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:36:23.836946   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:36:23.836995   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:36:23.837054   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:36:23.837106   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:36:23.837180   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:36:23.837230   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:36:23.837295   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:36:23.837361   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:36:23.837391   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:36:23.837444   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:36:23.837485   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:36:23.837544   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:36:23.837590   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:36:23.837644   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:36:23.837687   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:36:23.837754   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:36:23.837807   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:36:23.840574   41166 out.go:252]   - Booting up control plane ...
	I1009 18:36:23.840651   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:36:23.840709   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:36:23.840759   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:36:23.840847   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:36:23.840933   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:36:23.841023   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:36:23.841122   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:36:23.841176   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:36:23.841286   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:36:23.841382   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:36:23.841430   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500920961s
	I1009 18:36:23.841508   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:36:23.841575   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:36:23.841650   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:36:23.841721   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:36:23.841779   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	I1009 18:36:23.841844   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	I1009 18:36:23.841921   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	I1009 18:36:23.841927   41166 kubeadm.go:318] 
	I1009 18:36:23.842001   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:36:23.842071   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:36:23.842160   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:36:23.842237   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:36:23.842297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:36:23.842366   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:36:23.842394   41166 kubeadm.go:318] 
	W1009 18:36:23.842478   41166 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500920961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
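
kubeadm's own hint is the right next step here: with no kube-* containers found, the open question is whether CRI-O created the control-plane containers and they crashed, or never created them at all. Following the suggestion printed above (runtime endpoint as given by kubeadm; CONTAINERID is a placeholder):

	# list every kube container, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then inspect the logs of the failing container
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
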
	
	I1009 18:36:23.842555   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:36:24.285465   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:36:24.298222   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:36:24.298276   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:36:24.306625   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:36:24.306635   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:36:24.306675   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:36:24.314710   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:36:24.314750   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:36:24.322418   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:36:24.330123   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:36:24.330187   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:36:24.337953   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.346125   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:36:24.346179   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.354153   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:36:24.362094   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:36:24.362133   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:36:24.369784   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:36:24.426834   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:36:24.485641   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:40:27.797583   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:40:27.797662   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:40:27.800620   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:40:27.800659   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:40:27.800736   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:40:27.800783   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:40:27.800811   41166 kubeadm.go:318] OS: Linux
	I1009 18:40:27.800847   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:40:27.800885   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:40:27.800924   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:40:27.800985   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:40:27.801052   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:40:27.801090   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:40:27.801156   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:40:27.801201   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:40:27.801265   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:40:27.801343   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:40:27.801412   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:40:27.801484   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:40:27.805055   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:40:27.805120   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:40:27.805218   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:40:27.805293   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:40:27.805339   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:40:27.805412   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:40:27.805457   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:40:27.805510   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:40:27.805564   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:40:27.805620   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:40:27.805693   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:40:27.805748   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:40:27.805808   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:40:27.805852   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:40:27.805907   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:40:27.805950   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:40:27.805998   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:40:27.806045   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:40:27.806113   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:40:27.806212   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:40:27.807603   41166 out.go:252]   - Booting up control plane ...
	I1009 18:40:27.807673   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:40:27.807748   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:40:27.807805   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:40:27.807888   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:40:27.807967   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:40:27.808054   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:40:27.808118   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:40:27.808182   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:40:27.808282   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:40:27.808373   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:40:27.808424   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000969803s
	I1009 18:40:27.808512   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:40:27.808585   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:40:27.808667   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:40:27.808740   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:40:27.808798   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	I1009 18:40:27.808855   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	I1009 18:40:27.808919   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	I1009 18:40:27.808921   41166 kubeadm.go:318] 
	I1009 18:40:27.808989   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:40:27.809052   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:40:27.809124   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:40:27.809239   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:40:27.809297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:40:27.809386   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:40:27.809399   41166 kubeadm.go:318] 
	I1009 18:40:27.809438   41166 kubeadm.go:402] duration metric: took 12m10.090749097s to StartCluster
	I1009 18:40:27.809468   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:40:27.809513   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:40:27.837743   41166 cri.go:89] found id: ""
	I1009 18:40:27.837757   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.837763   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:40:27.837768   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:40:27.837814   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:40:27.863718   41166 cri.go:89] found id: ""
	I1009 18:40:27.863732   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.863738   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:40:27.863748   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:40:27.863792   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:40:27.889900   41166 cri.go:89] found id: ""
	I1009 18:40:27.889914   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.889920   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:40:27.889924   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:40:27.889980   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:40:27.916941   41166 cri.go:89] found id: ""
	I1009 18:40:27.916954   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.916960   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:40:27.916965   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:40:27.917024   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:40:27.943791   41166 cri.go:89] found id: ""
	I1009 18:40:27.943804   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.943809   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:40:27.943814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:40:27.943860   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:40:27.970612   41166 cri.go:89] found id: ""
	I1009 18:40:27.970625   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.970631   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:40:27.970635   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:40:27.970683   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:40:27.997688   41166 cri.go:89] found id: ""
	I1009 18:40:27.997700   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.997706   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:40:27.997713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:40:27.997721   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:40:28.064711   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:40:28.064730   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:40:28.076960   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:40:28.076978   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:40:28.135195   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:40:28.135206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:40:28.135216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:40:28.194198   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:40:28.194216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 18:40:28.224308   41166 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:40:28.224355   41166 out.go:285] * 
	W1009 18:40:28.224482   41166 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:40:28.224505   41166 out.go:285] * 
	W1009 18:40:28.226335   41166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:40:28.230950   41166 out.go:203] 
	W1009 18:40:28.232526   41166 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:40:28.232549   41166 out.go:285] * 
	I1009 18:40:28.235189   41166 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:40:18 functional-753440 crio[5806]: time="2025-10-09T18:40:18.559856771Z" level=info msg="createCtr: removing container 6cfef3edc6147e087968e3f2d08f61fb35db583f5e28cfa8c249e7d8c468f2e3" id=e4032d5b-6769-400b-9fc3-eb15e886a960 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:18 functional-753440 crio[5806]: time="2025-10-09T18:40:18.55988579Z" level=info msg="createCtr: deleting container 6cfef3edc6147e087968e3f2d08f61fb35db583f5e28cfa8c249e7d8c468f2e3 from storage" id=e4032d5b-6769-400b-9fc3-eb15e886a960 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:18 functional-753440 crio[5806]: time="2025-10-09T18:40:18.561982426Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=e4032d5b-6769-400b-9fc3-eb15e886a960 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.535965116Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=3284e07b-75ee-46a6-a72b-ddde93b90caa name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.536050816Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=9e994620-4824-455d-923b-3113ce0f0b1f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.536889445Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=b38779c4-30fd-49bf-973c-6c4d39ff8058 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.536905356Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=8cefb306-d6f2-4ffb-9148-752414ba0fc7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.537760532Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753440/kube-scheduler" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.537894673Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753440/kube-apiserver" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.537977793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.538073289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.543131713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.543583723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.544482915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.544937894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.561232429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562513475Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562718874Z" level=info msg="createCtr: deleting container ID 5089e63580fa138163a5434d6774e70806fd3b2b61a6691fd756e551d2db1984 from idIndex" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562744364Z" level=info msg="createCtr: removing container 5089e63580fa138163a5434d6774e70806fd3b2b61a6691fd756e551d2db1984" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562773215Z" level=info msg="createCtr: deleting container 5089e63580fa138163a5434d6774e70806fd3b2b61a6691fd756e551d2db1984 from storage" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.563961674Z" level=info msg="createCtr: deleting container ID d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323 from idIndex" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.564064963Z" level=info msg="createCtr: removing container d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.564114864Z" level=info msg="createCtr: deleting container d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323 from storage" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.56610003Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.566508491Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753440_kube-system_0d946ec5c615de29dae011722e300735_0" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:29.380706   15662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:29.381208   15662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:29.382828   15662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:29.383488   15662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:29.384922   15662 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:29 up  1:22,  0 user,  load average: 0.12, 0.06, 0.07
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:18 functional-753440 kubelet[14909]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:18 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:18 functional-753440 kubelet[14909]: E1009 18:40:18.562457   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:40:21 functional-753440 kubelet[14909]: E1009 18:40:21.342810   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:40:24 functional-753440 kubelet[14909]: E1009 18:40:24.158025   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:24 functional-753440 kubelet[14909]: I1009 18:40:24.313592   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:24 functional-753440 kubelet[14909]: E1009 18:40:24.313949   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.535584   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.535727   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.550512   14909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566413   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > podSandboxID="7a4353736f4a4433982204579f641a25b7ce51b570588adf77ed233c5025e9dc"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566505   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566536   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566767   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > podSandboxID="6fa88d0d4dd2687a2039db7efc159391e5e7ed9ab6f5700abe409768183910fe"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566838   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(0d946ec5c615de29dae011722e300735): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.567563   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:40:28 functional-753440 kubelet[14909]: E1009 18:40:28.847450   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (297.813416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (736.04s)
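
The "cannot open sd-bus: No such file or directory" errors above are the root of this failure: CRI-O never manages to create the kube-apiserver, kube-scheduler, and kube-controller-manager containers, so every subsequent health probe is refused. A minimal triage sketch along the lines the kubeadm output itself suggests, assuming the functional-753440 node is still running (the crictl invocations are copied from the log; the socket paths in the last step are the ones runc's systemd cgroup driver typically needs, and are an assumption rather than something this report confirms):

	# List all Kubernetes containers inside the node, as recommended by kubeadm:
	minikube ssh -p functional-753440 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Inspect the logs of a failing container (substitute CONTAINERID from the listing):
	minikube ssh -p functional-753440 "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"
	# "cannot open sd-bus" usually means systemd is unreachable inside the node;
	# check whether the systemd/D-Bus sockets exist (assumed paths):
	minikube ssh -p functional-753440 "ls -l /run/systemd/private /run/dbus/system_bus_socket"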

TestFunctional/serial/ComponentHealth (1.9s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-753440 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-753440 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (52.507099ms)

** stderr ** 
	E1009 18:40:30.163854   54388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:30.164283   54388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:30.165775   54388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:30.166329   54388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:30.167796   54388 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-753440 get po -l tier=control-plane -n kube-system -o=json": exit status 1
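
Every kubectl call here fails with the same "connection refused" against 192.168.49.2:8441, so the component query never reaches an apiserver at all. A quick host-side probe, built only from the endpoint and the /livez path that appear in these logs (the curl flags are illustrative; -k just skips certificate verification against minikube's CA):

	# Expect "connection refused" while the apiserver is down:
	curl -k https://192.168.49.2:8441/livez
	# Port 8441 is also published on the host loopback as 32781 (see the docker inspect output below):
	curl -k https://127.0.0.1:32781/livez
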
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (298.078469ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ unpause │ nospam-663194 --log_dir /tmp/nospam-663194 unpause                                                            │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ stop    │ nospam-663194 --log_dir /tmp/nospam-663194 stop                                                               │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ delete  │ -p nospam-663194                                                                                              │ nospam-663194     │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │ 09 Oct 25 18:13 UTC │
	│ start   │ -p functional-753440 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:13 UTC │                     │
	│ start   │ -p functional-753440 --alsologtostderr -v=8                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:21 UTC │                     │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.1                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:3.3                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add registry.k8s.io/pause:latest                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache add minikube-local-cache-test:functional-753440                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ functional-753440 cache delete minikube-local-cache-test:functional-753440                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl images                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ cache   │ functional-753440 cache reload                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ ssh     │ functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │ 09 Oct 25 18:28 UTC │
	│ kubectl │ functional-753440 kubectl -- --context functional-753440 get pods                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ start   │ -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:14.121358   41166 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:14.121581   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121584   41166 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:14.121587   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121762   41166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:28:14.122238   41166 out.go:368] Setting JSON to false
	I1009 18:28:14.123079   41166 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4242,"bootTime":1760030252,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:28:14.123169   41166 start.go:141] virtualization: kvm guest
	I1009 18:28:14.126034   41166 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:28:14.127592   41166 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:14.127614   41166 notify.go:220] Checking for updates...
	I1009 18:28:14.130226   41166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:14.131542   41166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:28:14.132869   41166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:28:14.134010   41166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:28:14.135272   41166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:14.137002   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:14.137147   41166 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:14.160624   41166 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:28:14.160747   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.216904   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.207579982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.216988   41166 docker.go:318] overlay module found
	I1009 18:28:14.218985   41166 out.go:179] * Using the docker driver based on existing profile
	I1009 18:28:14.220343   41166 start.go:305] selected driver: docker
	I1009 18:28:14.220350   41166 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.220421   41166 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:14.220493   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.276259   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.266635533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.276841   41166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:28:14.276862   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:14.276912   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
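
The line above records minikube's CNI choice: with the docker driver and the crio runtime, it recommends kindnet. A minimal Go sketch of that decision (the function name recommendCNI is hypothetical, not minikube's actual cni.go API):

    package main

    import "fmt"

    // recommendCNI mirrors the decision logged above: the "docker" driver
    // combined with a non-Docker runtime such as "crio" needs an explicit
    // CNI, and kindnet is the recommendation. Illustrative only.
    func recommendCNI(driver, runtime string) string {
    	if driver == "docker" && runtime == "crio" {
    		return "kindnet"
    	}
    	return "" // otherwise defer to the runtime's own defaults
    }

    func main() {
    	fmt.Println(recommendCNI("docker", "crio")) // kindnet
    }
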
	I1009 18:28:14.276975   41166 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.279613   41166 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:28:14.281054   41166 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:28:14.282608   41166 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:14.283987   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:14.284021   41166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:28:14.284028   41166 cache.go:64] Caching tarball of preloaded images
	I1009 18:28:14.284084   41166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:14.284156   41166 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:28:14.284167   41166 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:28:14.284262   41166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:28:14.304989   41166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:28:14.304998   41166 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:28:14.305012   41166 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:14.305037   41166 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:14.305103   41166 start.go:364] duration metric: took 53.763µs to acquireMachinesLock for "functional-753440"
	I1009 18:28:14.305117   41166 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:28:14.305123   41166 fix.go:54] fixHost starting: 
	I1009 18:28:14.305350   41166 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:28:14.322441   41166 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:28:14.322475   41166 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:28:14.324442   41166 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:28:14.324473   41166 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:14.324533   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.341338   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.341548   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.341554   41166 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:14.486226   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.486250   41166 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:28:14.486345   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.504505   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.504708   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.504715   41166 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:28:14.659579   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.659644   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.677783   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.677973   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.677983   41166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:14.823918   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
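
Every SSH call in this provisioning sequence is preceded by the same docker container inspect template, which resolves the host port that Docker mapped to the container's 22/tcp (32778 in this run). A small Go sketch of that lookup (helper name hostSSHPort is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort runs the same inspect template seen in the log and
    // returns the host port Docker bound to the container's 22/tcp.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("functional-753440")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }
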
	I1009 18:28:14.823946   41166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:28:14.823965   41166 ubuntu.go:190] setting up certificates
	I1009 18:28:14.823972   41166 provision.go:84] configureAuth start
	I1009 18:28:14.824015   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:14.841567   41166 provision.go:143] copyHostCerts
	I1009 18:28:14.841617   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:28:14.841630   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:28:14.841693   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:28:14.841773   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:28:14.841776   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:28:14.841800   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:28:14.841852   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:28:14.841854   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:28:14.841874   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
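
The copyHostCerts entries above follow a remove-then-copy pattern: any stale destination file is deleted before the source is copied in, so the result always matches the source byte for byte. A minimal sketch of that pattern (refreshCert is an illustrative name, not minikube's exec_runner API):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // refreshCert deletes any existing copy at dst, then copies src over,
    // matching the found/rm/cp triplets in the log above.
    func refreshCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY, 0o600)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	n, err := io.Copy(out, in)
    	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
    	return err
    }

    func main() {
    	if err := refreshCert("certs/ca.pem", "ca.pem"); err != nil {
    		panic(err)
    	}
    }
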
	I1009 18:28:14.841914   41166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:28:14.981751   41166 provision.go:177] copyRemoteCerts
	I1009 18:28:14.981793   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:14.981823   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.999896   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.102707   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:28:15.120896   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:28:15.138889   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:28:15.156869   41166 provision.go:87] duration metric: took 332.885748ms to configureAuth
	I1009 18:28:15.156885   41166 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:15.157034   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:15.157151   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.175195   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:15.175399   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:15.175409   41166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:28:15.452446   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:28:15.452465   41166 machine.go:96] duration metric: took 1.127985417s to provisionDockerMachine
	I1009 18:28:15.452477   41166 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:28:15.452491   41166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:15.452568   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:15.452629   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.470937   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.575864   41166 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:15.579955   41166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:15.579971   41166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:15.579990   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:28:15.580053   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:28:15.580152   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:28:15.580226   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:28:15.580265   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:28:15.588947   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:15.607328   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:28:15.625331   41166 start.go:296] duration metric: took 172.840814ms for postStartSetup
	I1009 18:28:15.625414   41166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:15.625450   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.644868   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.745460   41166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:15.750036   41166 fix.go:56] duration metric: took 1.444904813s for fixHost
	I1009 18:28:15.750054   41166 start.go:83] releasing machines lock for "functional-753440", held for 1.444944565s
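
The two df pipelines above are the post-start disk checks: df -h with awk field $5 yields the percentage of /var in use, and df -BG with field $4 yields the gigabytes still available. A sketch of the same checks from Go (diskStat is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // diskStat runs the df/awk pipeline from the log: NR==2 selects the
    // data row, and the field number selects the column of interest.
    func diskStat(args, field string) (string, error) {
    	cmd := fmt.Sprintf("df %s /var | awk 'NR==2{print $%s}'", args, field)
    	out, err := exec.Command("sh", "-c", cmd).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	used, _ := diskStat("-h", "5")  // percent of /var in use
    	free, _ := diskStat("-BG", "4") // gigabytes still available
    	fmt.Printf("/var: %s used, %s free\n", used, free)
    }
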
	I1009 18:28:15.750113   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:15.768383   41166 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:15.768426   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.768462   41166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:15.768509   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.787244   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.788794   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.887419   41166 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:15.939267   41166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:28:15.975115   41166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:15.980039   41166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:15.980121   41166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:15.988843   41166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:28:15.988855   41166 start.go:495] detecting cgroup driver to use...
	I1009 18:28:15.988896   41166 detect.go:190] detected "systemd" cgroup driver on host os
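
The "detected systemd cgroup driver" line comes from minikube's host inspection. The sketch below is only a common heuristic for that decision, assuming cgroup v2 implies the systemd driver; minikube's actual detect.go logic is not shown in this log and may differ:

    package main

    import (
    	"fmt"
    	"os"
    )

    // detectCgroupDriver is a heuristic sketch, not minikube's detect.go:
    // the presence of the unified-hierarchy control file indicates cgroup
    // v2, where the systemd driver is the safe choice.
    func detectCgroupDriver() string {
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Println("detected", detectCgroupDriver(), "cgroup driver on host os")
    }
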
	I1009 18:28:15.988937   41166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:28:16.003980   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:28:16.017315   41166 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:16.017382   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:16.032779   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:16.045881   41166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:16.126678   41166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:16.213883   41166 docker.go:234] disabling docker service ...
	I1009 18:28:16.213927   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:16.229180   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:16.242501   41166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:16.328471   41166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:16.418726   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:16.432452   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:16.447044   41166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:28:16.447090   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.456711   41166 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:28:16.456763   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.466740   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.476505   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.485804   41166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:16.494457   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.504131   41166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.513460   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.522986   41166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:16.531036   41166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:16.539288   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:16.625799   41166 ssh_runner.go:195] Run: sudo systemctl restart crio
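
The run of sed one-liners above rewrites the cri-o drop-in in place: pause_image and cgroup_manager are substituted wholesale, conmon_cgroup is re-inserted, and the unprivileged-port sysctl is appended. For illustration, here is the pause_image edit expressed in Go rather than sed (setPauseImage is a hypothetical helper; the path and image come from the log):

    package main

    import (
    	"os"
    	"regexp"
    )

    // setPauseImage performs the same whole-line substitution as the sed
    // command above: any line mentioning pause_image is replaced.
    func setPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10.1")
    	if err != nil {
    		panic(err)
    	}
    }
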
	I1009 18:28:16.734227   41166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:28:16.734392   41166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:28:16.738753   41166 start.go:563] Will wait 60s for crictl version
	I1009 18:28:16.738810   41166 ssh_runner.go:195] Run: which crictl
	I1009 18:28:16.742485   41166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:16.767659   41166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
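
The "Will wait 60s for socket path" step above is a simple stat-in-a-loop wait on the CRI socket before crictl is queried. A sketch of the equivalent wait (waitForSocket is an illustrative name):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the socket file exists or the deadline
    // passes, matching the 60s wait logged above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("crio socket is up")
    }
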
	I1009 18:28:16.767722   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.796602   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.826463   41166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:28:16.827844   41166 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:16.845122   41166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:16.851283   41166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 18:28:16.852593   41166 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:16.852703   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:16.852758   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.885854   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.885865   41166 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:28:16.885914   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.911537   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.911549   41166 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:16.911554   41166 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:28:16.911659   41166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:28:16.911716   41166 ssh_runner.go:195] Run: crio config
	I1009 18:28:16.959392   41166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 18:28:16.959415   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:16.959431   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:16.959447   41166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:16.959474   41166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:16.959581   41166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:16.959637   41166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:28:16.967720   41166 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:16.967786   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:16.975557   41166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:28:16.988463   41166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:17.001726   41166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
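
The "scp memory --> path (N bytes)" lines above stream in-memory buffers (the generated unit files and kubeadm.yaml.new) to the node over SSH rather than copying files from disk. A sketch of that idea using golang.org/x/crypto/ssh; the tee command, placeholder password auth, and helper name pushBytes are all assumptions for illustration, not minikube's actual ssh_runner internals:

    package main

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    // pushBytes streams an in-memory buffer to a remote path over an SSH
    // session, the gist of the "scp memory" lines above.
    func pushBytes(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // real runs use the machine's id_rsa key
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32778", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	conf := []byte("apiVersion: kubeadm.k8s.io/v1beta4\n")
    	if err := pushBytes(client, conf, "/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
    		panic(err)
    	}
    }
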
	I1009 18:28:17.014711   41166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:17.018916   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:17.102967   41166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:17.116133   41166 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:28:17.116168   41166 certs.go:195] generating shared ca certs ...
	I1009 18:28:17.116186   41166 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:17.116310   41166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:28:17.116344   41166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:28:17.116350   41166 certs.go:257] generating profile certs ...
	I1009 18:28:17.116439   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:28:17.116473   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:28:17.116504   41166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:28:17.116599   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:28:17.116623   41166 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:17.116628   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:17.116647   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:28:17.116699   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:17.116718   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:28:17.116754   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:17.117319   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:17.135881   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:28:17.153983   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:17.171867   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:17.189721   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:28:17.208056   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:28:17.226995   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:17.245251   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:28:17.263239   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:28:17.281041   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:28:17.298701   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:17.316541   41166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:17.329669   41166 ssh_runner.go:195] Run: openssl version
	I1009 18:28:17.335820   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:17.344631   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348564   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348610   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.382973   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:17.391446   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:28:17.399936   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403644   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403697   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.438115   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:17.446527   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:28:17.455201   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459043   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459093   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.494448   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
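
The hash/symlink pairs above exist because OpenSSL locates CA certificates in /etc/ssl/certs by <subject-hash>.0 filenames: each installed PEM gets a symlink named after its openssl x509 -hash output. A sketch of that step (linkByHash is a hypothetical helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkByHash computes the OpenSSL subject hash for a PEM and creates
    // the <hash>.0 symlink that the ln -fs commands above set up.
    func linkByHash(pem, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("%s/%s.0", certDir, hash)
    	os.Remove(link) // refresh any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		panic(err)
    	}
    }
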
	I1009 18:28:17.503208   41166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:17.507381   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:28:17.542560   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:28:17.577279   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:28:17.612414   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:28:17.648669   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:28:17.684353   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
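
Each openssl x509 -checkend 86400 invocation above asks whether a certificate expires within the next 24 hours. The same check can be done in pure Go with crypto/x509; a sketch (expiresWithin is an illustrative name):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate's NotAfter falls inside
    // the given window, mirroring "openssl x509 -checkend".
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
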
	I1009 18:28:17.718697   41166 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:17.718762   41166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:17.718816   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.747722   41166 cri.go:89] found id: ""
	I1009 18:28:17.747771   41166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:17.755951   41166 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:28:17.755970   41166 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:28:17.756013   41166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:28:17.763739   41166 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.764201   41166 kubeconfig.go:125] found "functional-753440" server: "https://192.168.49.2:8441"
	I1009 18:28:17.765394   41166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:28:17.773512   41166 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 18:13:46.132659514 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 18:28:17.012910366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
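
The drift detection above is just a unified diff whose exit status drives the decision: diff exits 0 when the files match and 1 when they differ, so any non-empty patch means the cluster is reconfigured from the new file. A sketch of that check (configDrifted is a hypothetical helper; paths are from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs "diff -u old new"; a non-zero exit (status 1)
    // means the files differ and the patch text explains how.
    func configDrifted(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
    	if err != nil { // exit status 1 => files differ
    		return true, string(out)
    	}
    	return false, ""
    }

    func main() {
    	drifted, patch := configDrifted("/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if drifted {
    		fmt.Println("kubeadm config drift detected:\n" + patch)
    	}
    }
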
	I1009 18:28:17.773526   41166 kubeadm.go:1160] stopping kube-system containers ...
	I1009 18:28:17.773536   41166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 18:28:17.773573   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.801424   41166 cri.go:89] found id: ""
	I1009 18:28:17.801491   41166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:28:17.844900   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:17.853365   41166 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  9 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  9 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  9 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  9 18:17 /etc/kubernetes/scheduler.conf
	
	I1009 18:28:17.853413   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:28:17.861284   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:28:17.869531   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.869582   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:17.877552   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.885384   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.885430   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.893514   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:28:17.901554   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.901605   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:28:17.910046   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:17.918503   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:17.960612   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.029109   41166 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068473628s)
	I1009 18:28:19.029180   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.195034   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.243702   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.294305   41166 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:28:19.294364   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:19.794527   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:20.295201   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:20.794575   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:21.295315   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:21.795156   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:22.294825   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:22.794676   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:23.295341   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:23.795290   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:24.295084   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:24.794558   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:25.295301   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:25.794886   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:26.295362   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:26.795204   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:27.295068   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:27.794501   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:28.295278   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:28.795020   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:29.294945   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:29.795382   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:30.294824   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:30.794608   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:31.295203   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:31.795244   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:32.294545   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:32.794712   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:33.294432   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:33.795152   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:34.294924   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:34.794572   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:35.295260   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:35.794912   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:36.294546   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:36.795240   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:37.294721   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:37.794468   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:38.295324   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:38.795118   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:39.295123   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:39.795377   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:40.294883   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:40.795163   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:41.294810   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:41.794568   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:42.295334   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:42.795216   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:43.294867   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:43.794631   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep probe repeats every ~500ms from 18:28:44.294 through 18:29:18.794 (70 attempts, none finding a kube-apiserver process) ...]
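The run above is minikube's liveness probe for the apiserver process: pgrep matches the pattern against the full command line (-f), requires the pattern to match it exactly (-x), and returns the newest match (-n). A minimal sketch of the same probe loop in bash; the ~35-second budget matches the 18:28:43-18:29:18 window observed here, but the actual timeout constant is not visible in this log:

    # Poll for a kube-apiserver process every 0.5s until it appears or the budget runs out.
    deadline=$(( SECONDS + 35 ))   # assumed budget, inferred from the window above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        (( SECONDS >= deadline )) && { echo 'kube-apiserver never appeared' >&2; break; }
        sleep 0.5
    done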
	I1009 18:29:19.295378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:19.295433   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:19.321387   41166 cri.go:89] found id: ""
	I1009 18:29:19.321402   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.321411   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:19.321418   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:19.321468   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:19.348366   41166 cri.go:89] found id: ""
	I1009 18:29:19.348380   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.348387   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:19.348391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:19.348435   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:19.374894   41166 cri.go:89] found id: ""
	I1009 18:29:19.374906   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.374912   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:19.374916   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:19.374955   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:19.401088   41166 cri.go:89] found id: ""
	I1009 18:29:19.401106   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.401114   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:19.401121   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:19.401191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:19.428021   41166 cri.go:89] found id: ""
	I1009 18:29:19.428033   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.428043   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:19.428047   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:19.428105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:19.454576   41166 cri.go:89] found id: ""
	I1009 18:29:19.454590   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.454595   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:19.454599   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:19.454639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:19.480743   41166 cri.go:89] found id: ""
	I1009 18:29:19.480760   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.480767   41166 logs.go:284] No container was found matching "kindnet"
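Once the process probe gives up, the diagnostic pass above enumerates the expected control-plane containers by name through the CRI; an empty ID list from crictl means the component has no container in any state, not even an exited one. The same enumeration as a standalone sketch:

    # List container IDs in all states for each expected component; crictl ps -a
    # includes exited containers, and --quiet prints only the IDs.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="$name")
        [ -z "$ids" ] && echo "no container matching \"$name\"" >&2
    done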
	I1009 18:29:19.480774   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:19.480783   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
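The container-status command just above uses a small shell fallback: `which crictl || echo crictl` keeps the command word non-empty when crictl is missing, so the crictl branch still runs (and fails), letting the outer || fall through to docker:

    # Prefer crictl when installed; otherwise the literal word "crictl" fails
    # to execute and the docker branch runs instead.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a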
	I1009 18:29:19.509728   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:19.509743   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:19.578764   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:19.578781   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
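With no containers to inspect, the fallback evidence is host-level: the last 400 kubelet journal entries and kernel messages at warn severity or worse. The two commands, exactly as run above, work standalone on the node:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400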
	I1009 18:29:19.590528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:19.590544   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:19.646752   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:19.639577    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.640309    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.641990    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.642451    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.643983    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:19.639577    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.640309    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.641990    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.642451    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.643983    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
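The describe-nodes failure above is the client-side view of the same outage: nothing is listening on the apiserver port (8441 for this profile), so every discovery request to https://localhost:8441/api is refused before kubectl can even fetch the API group list. The logged invocation, reproduced standalone:

    # Fails with "connection refused" for as long as the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig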
	I1009 18:29:19.646773   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:19.646784   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
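The cycle closes by pulling the CRI-O unit journal, the place where a crash-looping kube-apiserver container, or a runtime that never created one, would leave a trace:

    sudo journalctl -u crio -n 400

The whole probe-and-gather cycle then repeats every few seconds with identical results, as summarized below.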
	I1009 18:29:22.208868   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:22.219498   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:22.219549   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:22.245808   41166 cri.go:89] found id: ""
	I1009 18:29:22.245825   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.245833   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:22.245839   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:22.245884   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:22.271240   41166 cri.go:89] found id: ""
	I1009 18:29:22.271253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.271259   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:22.271263   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:22.271301   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:22.299626   41166 cri.go:89] found id: ""
	I1009 18:29:22.299641   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.299650   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:22.299656   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:22.299699   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:22.326461   41166 cri.go:89] found id: ""
	I1009 18:29:22.326473   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.326479   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:22.326484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:22.326526   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:22.352237   41166 cri.go:89] found id: ""
	I1009 18:29:22.352253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.352264   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:22.352268   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:22.352316   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:22.378255   41166 cri.go:89] found id: ""
	I1009 18:29:22.378268   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.378276   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:22.378297   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:22.378351   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:22.403983   41166 cri.go:89] found id: ""
	I1009 18:29:22.403999   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.404006   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:22.404013   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:22.404024   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:22.470710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:22.470727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:22.482584   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:22.482599   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:22.536359   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:22.529981    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.530412    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.531972    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.532353    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.533814    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:22.529981    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.530412    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.531972    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.532353    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.533814    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:22.536380   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:22.536394   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:22.601517   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:22.601533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:25.128918   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:25.139722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:25.139766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:25.165463   41166 cri.go:89] found id: ""
	I1009 18:29:25.165478   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.165486   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:25.165490   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:25.165537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:25.190387   41166 cri.go:89] found id: ""
	I1009 18:29:25.190400   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.190407   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:25.190411   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:25.190460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:25.216675   41166 cri.go:89] found id: ""
	I1009 18:29:25.216690   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.216698   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:25.216703   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:25.216747   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:25.242179   41166 cri.go:89] found id: ""
	I1009 18:29:25.242191   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.242197   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:25.242202   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:25.242248   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:25.267486   41166 cri.go:89] found id: ""
	I1009 18:29:25.267502   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.267511   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:25.267517   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:25.267568   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:25.297914   41166 cri.go:89] found id: ""
	I1009 18:29:25.297930   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.297939   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:25.297945   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:25.298000   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:25.328702   41166 cri.go:89] found id: ""
	I1009 18:29:25.328718   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.328727   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:25.328736   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:25.328747   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:25.395115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:25.395130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:25.407227   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:25.407245   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:25.462374   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:25.462400   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:25.462410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:25.525388   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:25.525409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:28.053225   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:28.063873   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:28.063918   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:28.088014   41166 cri.go:89] found id: ""
	I1009 18:29:28.088030   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.088038   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:28.088045   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:28.088091   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:28.114133   41166 cri.go:89] found id: ""
	I1009 18:29:28.114163   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.114172   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:28.114177   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:28.114221   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:28.138995   41166 cri.go:89] found id: ""
	I1009 18:29:28.139007   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.139017   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:28.139022   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:28.139072   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:28.163909   41166 cri.go:89] found id: ""
	I1009 18:29:28.163925   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.163984   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:28.163991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:28.164032   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:28.190078   41166 cri.go:89] found id: ""
	I1009 18:29:28.190091   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.190096   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:28.190101   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:28.190171   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:28.215236   41166 cri.go:89] found id: ""
	I1009 18:29:28.215251   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.215260   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:28.215265   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:28.215315   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:28.241659   41166 cri.go:89] found id: ""
	I1009 18:29:28.241675   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.241684   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:28.241692   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:28.241701   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:28.312258   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:28.312275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:28.323979   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:28.323994   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:28.380524   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:28.380538   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:28.380547   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:28.442571   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:28.442588   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:30.972438   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:30.983019   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:30.983078   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:31.007563   41166 cri.go:89] found id: ""
	I1009 18:29:31.007577   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.007585   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:31.007591   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:31.007665   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:31.033297   41166 cri.go:89] found id: ""
	I1009 18:29:31.033312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.033320   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:31.033326   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:31.033381   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:31.058733   41166 cri.go:89] found id: ""
	I1009 18:29:31.058748   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.058756   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:31.058761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:31.058815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:31.084119   41166 cri.go:89] found id: ""
	I1009 18:29:31.084133   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.084156   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:31.084162   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:31.084206   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:31.109429   41166 cri.go:89] found id: ""
	I1009 18:29:31.109442   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.109448   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:31.109452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:31.109510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:31.135299   41166 cri.go:89] found id: ""
	I1009 18:29:31.135312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.135322   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:31.135328   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:31.135413   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:31.162606   41166 cri.go:89] found id: ""
	I1009 18:29:31.162621   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.162636   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:31.162643   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:31.162652   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:31.230506   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:31.230556   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:31.241809   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:31.241825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:31.297388   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:31.297398   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:31.297413   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:31.361486   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:31.361502   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:33.891238   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:33.902005   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:33.902060   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:33.927598   41166 cri.go:89] found id: ""
	I1009 18:29:33.927612   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.927618   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:33.927622   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:33.927673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:33.952038   41166 cri.go:89] found id: ""
	I1009 18:29:33.952053   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.952061   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:33.952066   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:33.952145   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:33.976526   41166 cri.go:89] found id: ""
	I1009 18:29:33.976541   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.976549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:33.976556   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:33.976610   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:34.003219   41166 cri.go:89] found id: ""
	I1009 18:29:34.003234   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.003242   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:34.003247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:34.003330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:34.029762   41166 cri.go:89] found id: ""
	I1009 18:29:34.029775   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.029781   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:34.029785   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:34.029840   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:34.054085   41166 cri.go:89] found id: ""
	I1009 18:29:34.054097   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.054107   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:34.054112   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:34.054179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:34.080890   41166 cri.go:89] found id: ""
	I1009 18:29:34.080903   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.080909   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:34.080915   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:34.080926   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:34.110411   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:34.110426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:34.181234   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:34.181254   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:34.192758   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:34.192772   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:34.248477   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:34.248486   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:34.248496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:36.816158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:36.827291   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:36.827356   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:36.851760   41166 cri.go:89] found id: ""
	I1009 18:29:36.851775   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.851783   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:36.851789   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:36.851843   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:36.877217   41166 cri.go:89] found id: ""
	I1009 18:29:36.877231   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.877238   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:36.877243   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:36.877284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:36.902388   41166 cri.go:89] found id: ""
	I1009 18:29:36.902401   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.902407   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:36.902411   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:36.902450   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:36.927658   41166 cri.go:89] found id: ""
	I1009 18:29:36.927673   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.927679   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:36.927683   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:36.927735   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:36.952663   41166 cri.go:89] found id: ""
	I1009 18:29:36.952681   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.952688   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:36.952692   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:36.952731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:36.977753   41166 cri.go:89] found id: ""
	I1009 18:29:36.977768   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.977774   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:36.977779   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:36.977819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:37.002782   41166 cri.go:89] found id: ""
	I1009 18:29:37.002796   41166 logs.go:282] 0 containers: []
	W1009 18:29:37.002801   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:37.002807   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:37.002816   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:37.069710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:37.069726   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:37.081854   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:37.081876   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:37.136826   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:37.136835   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:37.136844   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:37.201251   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:37.201270   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:39.729692   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:39.740542   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:39.740597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:39.766240   41166 cri.go:89] found id: ""
	I1009 18:29:39.766255   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.766263   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:39.766269   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:39.766330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:39.792273   41166 cri.go:89] found id: ""
	I1009 18:29:39.792289   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.792298   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:39.792304   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:39.792360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:39.818498   41166 cri.go:89] found id: ""
	I1009 18:29:39.818513   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.818521   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:39.818526   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:39.818580   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:39.844118   41166 cri.go:89] found id: ""
	I1009 18:29:39.844131   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.844155   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:39.844161   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:39.844204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:39.870849   41166 cri.go:89] found id: ""
	I1009 18:29:39.870862   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.870868   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:39.870872   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:39.870911   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:39.896931   41166 cri.go:89] found id: ""
	I1009 18:29:39.896944   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.896949   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:39.896954   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:39.896996   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:39.923519   41166 cri.go:89] found id: ""
	I1009 18:29:39.923531   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.923537   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:39.923544   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:39.923553   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:39.990863   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:39.990880   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:40.002519   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:40.002534   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:40.059328   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:40.052153    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.052750    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054419    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054856    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.056426    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:40.052153    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.052750    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054419    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054856    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.056426    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:40.059339   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:40.059349   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:40.125328   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:40.125345   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
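
A minimal sketch of the per-component probe this loop runs on every pass, assuming shell access to the node (for example via minikube ssh). The commands are the same crictl invocations shown in the Run: lines above; an empty result is what the log records as found id: "":

	# Probe each expected control-plane container the way the loop above does.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  # An empty list corresponds to the log's: found id: ""
	  echo "$c: ${ids:-<none>}"
	done
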
	I1009 18:29:42.656004   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:42.666452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:42.666495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:42.691012   41166 cri.go:89] found id: ""
	I1009 18:29:42.691027   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.691037   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:42.691043   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:42.691086   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:42.715311   41166 cri.go:89] found id: ""
	I1009 18:29:42.715327   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.715335   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:42.715346   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:42.715385   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:42.741564   41166 cri.go:89] found id: ""
	I1009 18:29:42.741577   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.741584   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:42.741590   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:42.741639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:42.765961   41166 cri.go:89] found id: ""
	I1009 18:29:42.765974   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.765980   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:42.765985   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:42.766027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:42.792117   41166 cri.go:89] found id: ""
	I1009 18:29:42.792129   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.792149   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:42.792155   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:42.792208   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:42.817726   41166 cri.go:89] found id: ""
	I1009 18:29:42.817738   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.817745   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:42.817749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:42.817799   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:42.842806   41166 cri.go:89] found id: ""
	I1009 18:29:42.842823   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.842829   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:42.842836   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:42.842850   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:42.908734   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:42.908751   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:42.919767   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:42.919780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:42.975159   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:42.968444    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.969012    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.970635    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.971181    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.972729    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:42.968444    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.969012    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.970635    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.971181    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.972729    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:42.975170   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:42.975181   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:43.041463   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:43.041480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
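
Every "describe nodes" attempt in this run fails the same way: kubectl dials localhost:8441 and gets connection refused, which means nothing is bound to the apiserver port at all (not a TLS or auth failure). A minimal sketch of confirming that by hand, assuming shell access to the node and that ss and curl are present in the image:

	# No listener on 8441 would explain the repeated "connection refused".
	sudo ss -ltnp | grep ':8441' || echo "no listener on 8441"
	# /healthz probe; -s silences progress output, -k skips TLS verification.
	curl -sk https://localhost:8441/healthz || echo "connection refused, matching the log"
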
	I1009 18:29:45.571837   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:45.582376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:45.582431   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:45.608198   41166 cri.go:89] found id: ""
	I1009 18:29:45.608211   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.608217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:45.608221   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:45.608286   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:45.635099   41166 cri.go:89] found id: ""
	I1009 18:29:45.635112   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.635118   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:45.635126   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:45.635182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:45.660701   41166 cri.go:89] found id: ""
	I1009 18:29:45.660714   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.660720   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:45.660725   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:45.660765   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:45.686907   41166 cri.go:89] found id: ""
	I1009 18:29:45.686920   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.686926   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:45.686931   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:45.686981   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:45.712880   41166 cri.go:89] found id: ""
	I1009 18:29:45.712893   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.712899   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:45.712902   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:45.712941   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:45.738114   41166 cri.go:89] found id: ""
	I1009 18:29:45.738128   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.738147   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:45.738155   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:45.738200   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:45.764157   41166 cri.go:89] found id: ""
	I1009 18:29:45.764172   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.764178   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:45.764187   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:45.764196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:45.793189   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:45.793204   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:45.861447   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:45.861463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:45.872975   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:45.872988   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:45.928792   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:45.921633    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.922319    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.923962    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.924449    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.926072    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:45.921633    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.922319    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.923962    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.924449    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.926072    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:45.928810   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:45.928820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
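
Each pass gathers the same three host-level sources before retrying. A minimal sketch using the commands verbatim from the Run: lines above; journalctl -n 400 limits each unit to its last 400 entries, and the dmesg flags keep only warning-and-worse kernel messages:

	# Last 400 kubelet and CRI-O journal entries, plus filtered kernel messages.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
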
	I1009 18:29:48.494959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:48.505724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:48.505766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:48.531052   41166 cri.go:89] found id: ""
	I1009 18:29:48.531087   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.531099   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:48.531103   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:48.531167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:48.555479   41166 cri.go:89] found id: ""
	I1009 18:29:48.555492   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.555498   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:48.555502   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:48.555543   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:48.581427   41166 cri.go:89] found id: ""
	I1009 18:29:48.581444   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.581452   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:48.581460   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:48.581509   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:48.607162   41166 cri.go:89] found id: ""
	I1009 18:29:48.607176   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.607182   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:48.607187   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:48.607235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:48.632033   41166 cri.go:89] found id: ""
	I1009 18:29:48.632049   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.632058   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:48.632064   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:48.632106   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:48.657205   41166 cri.go:89] found id: ""
	I1009 18:29:48.657218   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.657224   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:48.657229   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:48.657280   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:48.681952   41166 cri.go:89] found id: ""
	I1009 18:29:48.681965   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.681970   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:48.681976   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:48.681986   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:48.751441   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:48.751459   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:48.763252   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:48.763266   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:48.819401   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:48.812637    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.813245    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.814774    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.815273    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.816784    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:48.812637    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.813245    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.814774    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.815273    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.816784    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:48.819413   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:48.819426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:48.882158   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:48.882176   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
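
The "container status" gather above relies on a shell fallback: the backticked which crictl || echo crictl substitutes the full path to crictl when it is installed, and if the crictl invocation fails outright, the || sudo docker ps -a branch runs instead. A minimal sketch, verbatim from the Run: line:

	# Prefer crictl (resolved via `which`); fall back to docker if it fails.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
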
	I1009 18:29:51.412646   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:51.423570   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:51.423613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:51.450043   41166 cri.go:89] found id: ""
	I1009 18:29:51.450058   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.450076   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:51.450081   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:51.450130   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:51.474654   41166 cri.go:89] found id: ""
	I1009 18:29:51.474669   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.474676   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:51.474683   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:51.474721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:51.500060   41166 cri.go:89] found id: ""
	I1009 18:29:51.500074   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.500079   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:51.500083   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:51.500125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:51.525095   41166 cri.go:89] found id: ""
	I1009 18:29:51.525110   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.525117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:51.525128   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:51.525192   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:51.550903   41166 cri.go:89] found id: ""
	I1009 18:29:51.550915   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.550921   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:51.550925   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:51.550963   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:51.576021   41166 cri.go:89] found id: ""
	I1009 18:29:51.576039   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.576045   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:51.576050   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:51.576101   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:51.601302   41166 cri.go:89] found id: ""
	I1009 18:29:51.601331   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.601337   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:51.601345   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:51.601357   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:51.673218   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:51.673234   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:51.684673   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:51.684688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:51.740747   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:51.733129    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.733652    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736069    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736560    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.738067    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:51.733129    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.733652    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736069    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736560    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.738067    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:51.740756   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:51.740765   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:51.804392   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:51.804410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:54.334647   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:54.345214   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:54.345259   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:54.371054   41166 cri.go:89] found id: ""
	I1009 18:29:54.371070   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.371077   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:54.371081   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:54.371123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:54.397390   41166 cri.go:89] found id: ""
	I1009 18:29:54.397406   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.397414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:54.397420   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:54.397469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:54.423212   41166 cri.go:89] found id: ""
	I1009 18:29:54.423225   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.423231   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:54.423235   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:54.423277   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:54.449723   41166 cri.go:89] found id: ""
	I1009 18:29:54.449738   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.449747   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:54.449753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:54.449794   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:54.476976   41166 cri.go:89] found id: ""
	I1009 18:29:54.476994   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.476999   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:54.477004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:54.477056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:54.502387   41166 cri.go:89] found id: ""
	I1009 18:29:54.502409   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.502419   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:54.502425   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:54.502471   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:54.528021   41166 cri.go:89] found id: ""
	I1009 18:29:54.528037   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.528045   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:54.528053   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:54.528062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:54.596551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:54.596569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:54.607908   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:54.607921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:54.663274   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:54.655349    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.655928    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658342    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658895    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.660440    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:54.655349    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.655928    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658342    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658895    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.660440    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:54.663284   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:54.663296   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:54.724548   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:54.724565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:57.253959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:57.264749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:57.264793   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:57.292216   41166 cri.go:89] found id: ""
	I1009 18:29:57.292234   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.292244   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:57.292252   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:57.292322   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:57.320628   41166 cri.go:89] found id: ""
	I1009 18:29:57.320644   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.320657   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:57.320663   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:57.320711   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:57.347524   41166 cri.go:89] found id: ""
	I1009 18:29:57.347541   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.347549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:57.347555   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:57.347599   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:57.374005   41166 cri.go:89] found id: ""
	I1009 18:29:57.374021   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.374029   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:57.374034   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:57.374080   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:57.398685   41166 cri.go:89] found id: ""
	I1009 18:29:57.398700   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.398706   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:57.398710   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:57.398758   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:57.424224   41166 cri.go:89] found id: ""
	I1009 18:29:57.424237   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.424243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:57.424247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:57.424298   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:57.449118   41166 cri.go:89] found id: ""
	I1009 18:29:57.449144   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.449153   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:57.449161   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:57.449170   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:57.477726   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:57.477741   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:57.549189   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:57.549206   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:57.560914   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:57.560933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:57.615954   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:57.615970   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:57.615980   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
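
A short note on the check that gates each retry of this loop: in pgrep, -f matches the pattern against the full command line, -x requires the whole command line to match, and -n returns only the newest match. A minimal sketch, assuming shell access to the node; a non-zero exit, as seen throughout this run, is why the loop keeps re-gathering logs:

	# Exits non-zero while no kube-apiserver process matches, as in this run.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver not running yet"
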
	I1009 18:30:00.177763   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:00.188584   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:00.188628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:00.214820   41166 cri.go:89] found id: ""
	I1009 18:30:00.214835   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.214844   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:00.214851   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:00.214895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:00.239376   41166 cri.go:89] found id: ""
	I1009 18:30:00.239393   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.239401   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:00.239407   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:00.239447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:00.265476   41166 cri.go:89] found id: ""
	I1009 18:30:00.265492   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.265500   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:00.265506   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:00.265556   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:00.291131   41166 cri.go:89] found id: ""
	I1009 18:30:00.291158   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.291167   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:00.291174   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:00.291226   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:00.316623   41166 cri.go:89] found id: ""
	I1009 18:30:00.316636   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.316642   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:00.316646   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:00.316693   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:00.341462   41166 cri.go:89] found id: ""
	I1009 18:30:00.341476   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.341485   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:00.341490   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:00.341531   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:00.366641   41166 cri.go:89] found id: ""
	I1009 18:30:00.366657   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.366663   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:00.366670   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:00.366679   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:00.397505   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:00.397539   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:00.469540   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:00.469557   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:00.481466   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:00.481480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:00.537449   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:00.537457   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:00.537466   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:03.107457   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:03.117969   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:03.118030   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:03.144661   41166 cri.go:89] found id: ""
	I1009 18:30:03.144676   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.144684   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:03.144689   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:03.144731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:03.169819   41166 cri.go:89] found id: ""
	I1009 18:30:03.169832   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.169838   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:03.169842   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:03.169880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:03.195252   41166 cri.go:89] found id: ""
	I1009 18:30:03.195264   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.195271   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:03.195276   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:03.195319   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:03.221154   41166 cri.go:89] found id: ""
	I1009 18:30:03.221169   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.221176   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:03.221181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:03.221222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:03.247656   41166 cri.go:89] found id: ""
	I1009 18:30:03.247670   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.247676   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:03.247680   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:03.247736   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:03.273363   41166 cri.go:89] found id: ""
	I1009 18:30:03.273378   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.273386   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:03.273391   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:03.273439   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:03.297383   41166 cri.go:89] found id: ""
	I1009 18:30:03.297399   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.297407   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:03.297415   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:03.297426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:03.327096   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:03.327110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:03.396551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:03.396569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:03.408005   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:03.408020   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:03.462643   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:03.462656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:03.462667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.023381   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:06.034110   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:06.034175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:06.059176   41166 cri.go:89] found id: ""
	I1009 18:30:06.059191   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.059197   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:06.059201   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:06.059261   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:06.085110   41166 cri.go:89] found id: ""
	I1009 18:30:06.085126   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.085146   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:06.085154   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:06.085211   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:06.110722   41166 cri.go:89] found id: ""
	I1009 18:30:06.110738   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.110747   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:06.110753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:06.110806   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:06.136728   41166 cri.go:89] found id: ""
	I1009 18:30:06.136744   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.136752   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:06.136758   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:06.136815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:06.162322   41166 cri.go:89] found id: ""
	I1009 18:30:06.162337   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.162345   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:06.162351   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:06.162409   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:06.189203   41166 cri.go:89] found id: ""
	I1009 18:30:06.189217   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.189225   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:06.189230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:06.189374   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:06.215767   41166 cri.go:89] found id: ""
	I1009 18:30:06.215781   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.215790   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:06.215798   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:06.215811   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:06.286131   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:06.286154   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:06.297884   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:06.297899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:06.354614   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:06.354625   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:06.354634   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.421245   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:06.421263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
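Each retry above follows the same shape: probe for control-plane containers, then gather logs when none are found. As a hedged sketch (the pgrep and crictl invocations are copied from the log entries above; shell access to the node, e.g. via `minikube ssh`, is an assumption), the probe half amounts to:

    # Probe: is an apiserver process running, and do any control-plane containers exist?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      sudo crictl ps -a --quiet --name="$c"   # empty output = no container found, as in every cycle logged here
    done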
	I1009 18:30:08.950561   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:08.961412   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:08.961461   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:08.985056   41166 cri.go:89] found id: ""
	I1009 18:30:08.985073   41166 logs.go:282] 0 containers: []
	W1009 18:30:08.985081   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:08.985086   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:08.985155   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:09.010161   41166 cri.go:89] found id: ""
	I1009 18:30:09.010177   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.010185   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:09.010190   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:09.010240   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:09.035006   41166 cri.go:89] found id: ""
	I1009 18:30:09.035021   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.035030   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:09.035035   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:09.035079   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:09.059807   41166 cri.go:89] found id: ""
	I1009 18:30:09.059822   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.059831   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:09.059836   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:09.059877   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:09.085467   41166 cri.go:89] found id: ""
	I1009 18:30:09.085482   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.085490   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:09.085495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:09.085536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:09.110808   41166 cri.go:89] found id: ""
	I1009 18:30:09.110821   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.110826   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:09.110831   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:09.110869   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:09.135842   41166 cri.go:89] found id: ""
	I1009 18:30:09.135854   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.135860   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:09.135867   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:09.135875   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:09.195931   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:09.195948   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:09.225362   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:09.225375   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:09.296888   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:09.296905   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:09.309206   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:09.309223   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:09.365940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
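The kubectl failures are consistent with that: /var/lib/minikube/kubeconfig points the client at https://localhost:8441, and nothing is listening there because no kube-apiserver container ever starts. A minimal manual check, assuming shell access to the node (the kubectl binary and kubeconfig paths are the ones in the log; the curl probe is an added assumption, any TLS client would do):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl get --raw /healthz \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails: connection refused
    curl -ksS https://localhost:8441/healthz      # refused while no apiserver listens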
	I1009 18:30:11.867608   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:11.878320   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:11.878362   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:11.904080   41166 cri.go:89] found id: ""
	I1009 18:30:11.904094   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.904103   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:11.904109   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:11.904175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:11.930291   41166 cri.go:89] found id: ""
	I1009 18:30:11.930308   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.930327   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:11.930332   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:11.930372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:11.955946   41166 cri.go:89] found id: ""
	I1009 18:30:11.955959   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.955965   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:11.955970   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:11.956022   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:11.981169   41166 cri.go:89] found id: ""
	I1009 18:30:11.981184   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.981190   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:11.981197   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:11.981254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:12.006868   41166 cri.go:89] found id: ""
	I1009 18:30:12.006882   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.006890   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:12.006896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:12.006950   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:12.033045   41166 cri.go:89] found id: ""
	I1009 18:30:12.033062   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.033070   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:12.033076   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:12.033123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:12.059215   41166 cri.go:89] found id: ""
	I1009 18:30:12.059228   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.059233   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:12.059240   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:12.059249   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:12.088610   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:12.088630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:12.156730   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:12.156750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:12.168340   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:12.168354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:12.224955   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:12.217733    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.218350    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220045    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220517    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.222048    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:12.224965   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:12.224974   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:14.790502   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:14.801228   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:14.801285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:14.828449   41166 cri.go:89] found id: ""
	I1009 18:30:14.828469   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.828478   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:14.828486   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:14.828539   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:14.854655   41166 cri.go:89] found id: ""
	I1009 18:30:14.854672   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.854681   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:14.854687   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:14.854730   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:14.880081   41166 cri.go:89] found id: ""
	I1009 18:30:14.880103   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.880110   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:14.880119   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:14.880182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:14.906543   41166 cri.go:89] found id: ""
	I1009 18:30:14.906556   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.906562   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:14.906567   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:14.906607   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:14.932338   41166 cri.go:89] found id: ""
	I1009 18:30:14.932354   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.932360   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:14.932365   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:14.932417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:14.959648   41166 cri.go:89] found id: ""
	I1009 18:30:14.959661   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.959666   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:14.959670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:14.959722   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:14.985626   41166 cri.go:89] found id: ""
	I1009 18:30:14.985642   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.985651   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:14.985657   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:14.985667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:15.059129   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:15.059150   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:15.070684   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:15.070698   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:15.127441   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:15.120544    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.121101    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.122649    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.123113    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.124615    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:15.127451   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:15.127462   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:15.188736   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:15.188755   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:17.720548   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:17.731158   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:17.731199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:17.756463   41166 cri.go:89] found id: ""
	I1009 18:30:17.756478   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.756485   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:17.756489   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:17.756532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:17.780776   41166 cri.go:89] found id: ""
	I1009 18:30:17.780792   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.780799   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:17.780804   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:17.780845   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:17.805635   41166 cri.go:89] found id: ""
	I1009 18:30:17.805648   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.805654   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:17.805658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:17.805700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:17.832060   41166 cri.go:89] found id: ""
	I1009 18:30:17.832074   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.832079   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:17.832084   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:17.832125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:17.859215   41166 cri.go:89] found id: ""
	I1009 18:30:17.859231   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.859240   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:17.859248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:17.859299   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:17.884007   41166 cri.go:89] found id: ""
	I1009 18:30:17.884021   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.884027   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:17.884031   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:17.884073   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:17.908524   41166 cri.go:89] found id: ""
	I1009 18:30:17.908537   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.908543   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:17.908550   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:17.908559   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:17.974071   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:17.974088   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:17.985794   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:17.985809   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:18.042658   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:18.035698    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.036247    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.037804    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.038378    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.039940    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:18.042678   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:18.042688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:18.104183   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:18.104201   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.634002   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:20.645000   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:20.645074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:20.671295   41166 cri.go:89] found id: ""
	I1009 18:30:20.671309   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.671320   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:20.671325   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:20.671370   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:20.699380   41166 cri.go:89] found id: ""
	I1009 18:30:20.699393   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.699399   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:20.699404   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:20.699508   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:20.728459   41166 cri.go:89] found id: ""
	I1009 18:30:20.728483   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.728490   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:20.728502   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:20.728571   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:20.755606   41166 cri.go:89] found id: ""
	I1009 18:30:20.755626   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.755637   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:20.755643   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:20.755704   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:20.783272   41166 cri.go:89] found id: ""
	I1009 18:30:20.783285   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.783291   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:20.783295   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:20.783338   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:20.810985   41166 cri.go:89] found id: ""
	I1009 18:30:20.810998   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.811005   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:20.811009   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:20.811090   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:20.838557   41166 cri.go:89] found id: ""
	I1009 18:30:20.838573   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.838580   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:20.838588   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:20.838597   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.868656   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:20.868669   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:20.940019   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:20.940041   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:20.952293   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:20.952307   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:21.010202   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:21.003172    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.003783    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.005520    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.006014    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.007633    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:21.010215   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:21.010228   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.575003   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:23.585670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:23.585721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:23.611187   41166 cri.go:89] found id: ""
	I1009 18:30:23.611202   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.611208   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:23.611216   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:23.611267   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:23.636952   41166 cri.go:89] found id: ""
	I1009 18:30:23.636966   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.636972   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:23.636977   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:23.637018   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:23.661266   41166 cri.go:89] found id: ""
	I1009 18:30:23.661282   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.661289   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:23.661294   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:23.661343   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:23.687560   41166 cri.go:89] found id: ""
	I1009 18:30:23.687573   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.687578   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:23.687583   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:23.687637   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:23.712015   41166 cri.go:89] found id: ""
	I1009 18:30:23.712031   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.712040   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:23.712046   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:23.712103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:23.738106   41166 cri.go:89] found id: ""
	I1009 18:30:23.738120   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.738126   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:23.738130   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:23.738191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:23.764275   41166 cri.go:89] found id: ""
	I1009 18:30:23.764288   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.764307   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:23.764314   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:23.764322   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:23.775354   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:23.775367   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:23.831862   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:23.824872    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.825499    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827105    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827605    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.829326    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:23.831884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:23.831893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.894598   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:23.894614   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:23.922715   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:23.922731   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:26.494758   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:26.505984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:26.506076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:26.532013   41166 cri.go:89] found id: ""
	I1009 18:30:26.532029   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.532037   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:26.532042   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:26.532088   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:26.558247   41166 cri.go:89] found id: ""
	I1009 18:30:26.558278   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.558286   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:26.558290   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:26.558335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:26.583466   41166 cri.go:89] found id: ""
	I1009 18:30:26.583479   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.583485   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:26.583495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:26.583536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:26.611101   41166 cri.go:89] found id: ""
	I1009 18:30:26.611114   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.611126   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:26.611131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:26.611199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:26.636533   41166 cri.go:89] found id: ""
	I1009 18:30:26.636547   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.636553   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:26.636557   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:26.636594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:26.661023   41166 cri.go:89] found id: ""
	I1009 18:30:26.661039   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.661048   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:26.661055   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:26.661103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:26.686499   41166 cri.go:89] found id: ""
	I1009 18:30:26.686511   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.686518   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:26.686524   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:26.686533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:26.750968   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:26.750986   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:26.762679   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:26.762697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:26.819065   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:26.812332    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.812909    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.814580    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.815057    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.816557    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:26.819088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:26.819097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:26.882784   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:26.882801   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:29.411957   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:29.422542   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:29.422590   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:29.448891   41166 cri.go:89] found id: ""
	I1009 18:30:29.448907   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.448916   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:29.448921   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:29.448968   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:29.474806   41166 cri.go:89] found id: ""
	I1009 18:30:29.474823   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.474829   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:29.474834   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:29.474875   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:29.501280   41166 cri.go:89] found id: ""
	I1009 18:30:29.501293   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.501299   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:29.501303   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:29.501344   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:29.528191   41166 cri.go:89] found id: ""
	I1009 18:30:29.528204   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.528210   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:29.528214   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:29.528253   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:29.554786   41166 cri.go:89] found id: ""
	I1009 18:30:29.554799   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.554806   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:29.554811   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:29.554853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:29.579893   41166 cri.go:89] found id: ""
	I1009 18:30:29.579909   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.579918   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:29.579922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:29.579965   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:29.605961   41166 cri.go:89] found id: ""
	I1009 18:30:29.605974   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.605983   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:29.605998   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:29.606010   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:29.667811   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:29.667839   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:29.697600   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:29.697622   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:29.767295   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:29.767316   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:29.779348   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:29.779365   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:29.835961   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:29.829223    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.829767    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831335    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831758    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.833341    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:29.829223    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.829767    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831335    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831758    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.833341    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
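The loop above is minikube's apiserver wait: it first looks for a running kube-apiserver process with pgrep, then asks the CRI runtime for each control-plane container by name, and every query comes back empty. A minimal sketch of the same manual check from inside the node (component names are copied from the log; crictl is assumed to be installed, as it is in the minikube node image):

	# process-level check, exactly as the wait loop runs it
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# container-level check: an empty result per component matches the
	# found id: "" entries above
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"
	done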
	I1009 18:30:32.337665   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:32.348466   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:32.348524   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:32.374886   41166 cri.go:89] found id: ""
	I1009 18:30:32.374904   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.374914   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:32.374922   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:32.374970   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:32.400529   41166 cri.go:89] found id: ""
	I1009 18:30:32.400545   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.400554   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:32.400560   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:32.400613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:32.426791   41166 cri.go:89] found id: ""
	I1009 18:30:32.426807   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.426812   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:32.426817   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:32.426857   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:32.452312   41166 cri.go:89] found id: ""
	I1009 18:30:32.452327   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.452332   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:32.452337   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:32.452418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:32.477378   41166 cri.go:89] found id: ""
	I1009 18:30:32.477392   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.477398   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:32.477402   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:32.477445   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:32.503118   41166 cri.go:89] found id: ""
	I1009 18:30:32.503131   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.503154   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:32.503161   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:32.503204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:32.528118   41166 cri.go:89] found id: ""
	I1009 18:30:32.528132   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.528156   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:32.528165   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:32.528175   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:32.591877   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:32.591893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:32.603816   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:32.603831   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:32.660681   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:32.653480    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.654399    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.655963    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.656383    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.657937    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:32.653480    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.654399    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.655963    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.656383    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.657937    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:32.660698   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:32.660707   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:32.720544   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:32.720563   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
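Every describe-nodes attempt in these cycles fails identically: nothing is listening on port 8441, the apiserver port from this run's kubeconfig, so the client gets a TCP connection refused before API discovery can even start. Two quick checks from inside the node reproduce it (ss and curl are assumed available; the port is taken from the errors above, not from minikube source):

	# is anything bound to the apiserver port?
	sudo ss -ltnp | grep ':8441' || echo 'nothing listening on 8441'

	# the same refusal kubectl reports, without kubectl in the way
	curl -k https://localhost:8441/healthz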
	I1009 18:30:35.252168   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:35.262910   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:35.262957   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:35.288174   41166 cri.go:89] found id: ""
	I1009 18:30:35.288191   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.288199   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:35.288205   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:35.288262   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:35.313498   41166 cri.go:89] found id: ""
	I1009 18:30:35.313515   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.313523   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:35.313529   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:35.313576   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:35.337926   41166 cri.go:89] found id: ""
	I1009 18:30:35.337942   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.337950   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:35.337956   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:35.337998   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:35.364071   41166 cri.go:89] found id: ""
	I1009 18:30:35.364085   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.364093   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:35.364100   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:35.364185   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:35.390353   41166 cri.go:89] found id: ""
	I1009 18:30:35.390367   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.390373   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:35.390378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:35.390419   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:35.416164   41166 cri.go:89] found id: ""
	I1009 18:30:35.416179   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.416185   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:35.416190   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:35.416230   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:35.442115   41166 cri.go:89] found id: ""
	I1009 18:30:35.442131   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.442152   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:35.442161   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:35.442172   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:35.512407   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:35.512424   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:35.524233   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:35.524246   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:35.581940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:35.574890    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.575447    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577004    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577533    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.579108    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:35.574890    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.575447    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577004    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577533    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.579108    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:35.581954   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:35.581963   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:35.645796   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:35.645815   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:38.176188   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:38.187286   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:38.187337   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:38.213431   41166 cri.go:89] found id: ""
	I1009 18:30:38.213447   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.213454   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:38.213458   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:38.213506   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:38.239289   41166 cri.go:89] found id: ""
	I1009 18:30:38.239305   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.239313   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:38.239322   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:38.239375   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:38.266575   41166 cri.go:89] found id: ""
	I1009 18:30:38.266590   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.266599   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:38.266604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:38.266659   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:38.293047   41166 cri.go:89] found id: ""
	I1009 18:30:38.293062   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.293071   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:38.293077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:38.293132   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:38.321467   41166 cri.go:89] found id: ""
	I1009 18:30:38.321483   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.321497   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:38.321503   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:38.321550   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:38.348227   41166 cri.go:89] found id: ""
	I1009 18:30:38.348251   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.348259   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:38.348263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:38.348306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:38.374014   41166 cri.go:89] found id: ""
	I1009 18:30:38.374027   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.374033   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:38.374039   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:38.374049   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:38.402788   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:38.402802   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:38.467775   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:38.467793   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:38.479120   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:38.479133   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:38.534788   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:38.527716   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.528266   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.529835   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.530310   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.531921   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:38.527716   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.528266   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.529835   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.530310   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.531921   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:38.534798   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:38.534808   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:41.097400   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:41.108281   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:41.108326   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:41.134432   41166 cri.go:89] found id: ""
	I1009 18:30:41.134448   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.134456   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:41.134461   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:41.134502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:41.160000   41166 cri.go:89] found id: ""
	I1009 18:30:41.160045   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.160055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:41.160071   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:41.160116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:41.185957   41166 cri.go:89] found id: ""
	I1009 18:30:41.185971   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.185979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:41.185985   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:41.186046   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:41.212581   41166 cri.go:89] found id: ""
	I1009 18:30:41.212595   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.212604   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:41.212611   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:41.212664   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:41.239537   41166 cri.go:89] found id: ""
	I1009 18:30:41.239550   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.239556   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:41.239560   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:41.239603   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:41.264876   41166 cri.go:89] found id: ""
	I1009 18:30:41.264891   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.264906   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:41.264915   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:41.264961   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:41.293949   41166 cri.go:89] found id: ""
	I1009 18:30:41.293962   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.293968   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:41.293975   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:41.293985   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:41.306008   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:41.306023   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:41.363715   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:41.356554   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.357179   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.358764   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.359246   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.361018   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:41.356554   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.357179   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.358764   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.359246   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.361018   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:41.363727   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:41.363736   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:41.427974   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:41.427993   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:41.457063   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:41.457080   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
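Each retry gathers the same four pieces of evidence before starting the next cycle: the last 400 journal lines for CRI-O and for the kubelet, the filtered kernel log, and a full container listing. The commands, copied verbatim from the runner lines above, can be replayed by hand inside the node:

	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# resolves crictl's path, falling back to a bare crictl name and then to docker
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a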
	I1009 18:30:44.027395   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:44.038545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:44.038600   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:44.065345   41166 cri.go:89] found id: ""
	I1009 18:30:44.065358   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.065364   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:44.065369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:44.065418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:44.092543   41166 cri.go:89] found id: ""
	I1009 18:30:44.092558   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.092572   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:44.092578   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:44.092628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:44.117582   41166 cri.go:89] found id: ""
	I1009 18:30:44.117598   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.117606   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:44.117612   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:44.117663   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:44.144537   41166 cri.go:89] found id: ""
	I1009 18:30:44.144554   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.144563   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:44.144569   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:44.144630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:44.170004   41166 cri.go:89] found id: ""
	I1009 18:30:44.170020   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.170027   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:44.170032   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:44.170085   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:44.195566   41166 cri.go:89] found id: ""
	I1009 18:30:44.195581   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.195587   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:44.195591   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:44.195638   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:44.221237   41166 cri.go:89] found id: ""
	I1009 18:30:44.221250   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.221256   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:44.221264   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:44.221273   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:44.290040   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:44.290059   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:44.301528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:44.301543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:44.356883   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:44.350018   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.350577   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352116   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352527   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.353985   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:44.350018   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.350577   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352116   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352527   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.353985   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:44.356892   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:44.356904   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:44.421203   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:44.421220   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:46.952072   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:46.962761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:46.962852   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:46.988381   41166 cri.go:89] found id: ""
	I1009 18:30:46.988395   41166 logs.go:282] 0 containers: []
	W1009 18:30:46.988401   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:46.988406   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:46.988447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:47.014123   41166 cri.go:89] found id: ""
	I1009 18:30:47.014151   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.014161   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:47.014167   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:47.014223   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:47.040379   41166 cri.go:89] found id: ""
	I1009 18:30:47.040395   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.040403   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:47.040409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:47.040460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:47.066430   41166 cri.go:89] found id: ""
	I1009 18:30:47.066444   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.066450   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:47.066454   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:47.066495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:47.092458   41166 cri.go:89] found id: ""
	I1009 18:30:47.092471   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.092476   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:47.092481   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:47.092522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:47.118558   41166 cri.go:89] found id: ""
	I1009 18:30:47.118574   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.118582   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:47.118588   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:47.118639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:47.143956   41166 cri.go:89] found id: ""
	I1009 18:30:47.143969   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.143975   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:47.143983   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:47.143991   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:47.204921   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:47.204939   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:47.233955   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:47.233972   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:47.299659   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:47.299725   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:47.310930   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:47.310944   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:47.365782   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:47.358862   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.359473   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361059   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361558   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.363067   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:47.358862   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.359473   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361059   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361558   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.363067   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
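The timestamps show the wait loop retrying roughly every three seconds (18:30:26, :29, :32, and so on), and every pass finds zero containers. A hedged sketch of that polling pattern as a standalone script (the three-second interval is inferred from the timestamps above, not taken from minikube source):

	# poll until a kube-apiserver process appears, mirroring the cadence above
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  echo 'apiserver not up yet; retrying in 3s'
	  sleep 3
	done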
	I1009 18:30:49.866821   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:49.877492   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:49.877546   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:49.902235   41166 cri.go:89] found id: ""
	I1009 18:30:49.902249   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.902255   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:49.902260   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:49.902330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:49.927833   41166 cri.go:89] found id: ""
	I1009 18:30:49.927848   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.927855   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:49.927859   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:49.927914   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:49.952484   41166 cri.go:89] found id: ""
	I1009 18:30:49.952500   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.952515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:49.952525   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:49.952653   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:49.978974   41166 cri.go:89] found id: ""
	I1009 18:30:49.978989   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.978997   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:49.979003   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:49.979055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:50.003996   41166 cri.go:89] found id: ""
	I1009 18:30:50.004011   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.004020   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:50.004026   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:50.004074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:50.029201   41166 cri.go:89] found id: ""
	I1009 18:30:50.029213   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.029220   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:50.029225   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:50.029285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:50.055190   41166 cri.go:89] found id: ""
	I1009 18:30:50.055203   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.055208   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:50.055215   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:50.055224   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:50.124075   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:50.124092   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:50.135918   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:50.135933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:50.192425   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:50.185538   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.186038   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.187643   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.188060   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.189680   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:50.185538   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.186038   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.187643   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.188060   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.189680   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:50.192437   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:50.192450   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:50.252346   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:50.252364   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:52.781770   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:52.792376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:52.792418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:52.818902   41166 cri.go:89] found id: ""
	I1009 18:30:52.818916   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.818922   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:52.818941   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:52.818984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:52.844120   41166 cri.go:89] found id: ""
	I1009 18:30:52.844145   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.844154   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:52.844160   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:52.844205   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:52.870228   41166 cri.go:89] found id: ""
	I1009 18:30:52.870242   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.870254   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:52.870259   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:52.870305   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:52.896056   41166 cri.go:89] found id: ""
	I1009 18:30:52.896073   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.896082   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:52.896089   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:52.896151   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:52.921111   41166 cri.go:89] found id: ""
	I1009 18:30:52.921126   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.921145   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:52.921152   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:52.921198   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:52.947164   41166 cri.go:89] found id: ""
	I1009 18:30:52.947180   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.947189   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:52.947194   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:52.947246   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:52.972398   41166 cri.go:89] found id: ""
	I1009 18:30:52.972412   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.972419   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:52.972426   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:52.972441   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:53.041501   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:53.041519   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:53.053308   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:53.053324   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:53.109333   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:53.102407   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.102951   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104551   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104933   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.106568   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:53.109342   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:53.109351   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:53.168700   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:53.168718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
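
The block above is one complete probe cycle: minikube first looks for a kube-apiserver process, then asks CRI-O for each expected control-plane container, and every query comes back empty. A minimal bash sketch of that per-component probe, assuming it runs inside the minikube node (e.g. via `minikube ssh`) where crictl can reach CRI-O:

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      # --quiet prints only container IDs; empty output means no match
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
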
	I1009 18:30:55.699434   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:55.709814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:55.709854   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:55.734822   41166 cri.go:89] found id: ""
	I1009 18:30:55.734841   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.734851   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:55.734858   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:55.734916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:55.759667   41166 cri.go:89] found id: ""
	I1009 18:30:55.759684   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.759692   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:55.759698   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:55.759750   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:55.785789   41166 cri.go:89] found id: ""
	I1009 18:30:55.785805   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.785813   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:55.785819   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:55.785872   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:55.810465   41166 cri.go:89] found id: ""
	I1009 18:30:55.810481   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.810490   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:55.810496   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:55.810537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:55.836067   41166 cri.go:89] found id: ""
	I1009 18:30:55.836080   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.836086   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:55.836091   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:55.836131   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:55.860951   41166 cri.go:89] found id: ""
	I1009 18:30:55.860967   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.860974   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:55.860978   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:55.861021   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:55.885761   41166 cri.go:89] found id: ""
	I1009 18:30:55.885775   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.885781   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:55.885788   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:55.885797   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:55.915265   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:55.915280   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:55.981115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:55.981146   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:55.993311   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:55.993328   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:56.050751   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:56.043889   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.044374   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.045969   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.046413   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.047907   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:56.050764   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:56.050774   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
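
Every describe-nodes attempt fails the same way: kubectl gets connection refused on localhost:8441 (the apiserver port this profile's kubeconfig points at), meaning nothing is listening there at all rather than the apiserver answering with an error. Two quick checks from inside the node, a sketch using only standard Linux tooling:

    # list any TCP listener on the apiserver port (no output = no listener)
    sudo ss -ltn 'sport = :8441'
    # probe the health endpoint; a refused connection confirms the diagnosis
    curl -ksS --max-time 5 https://localhost:8441/healthz || echo "apiserver not reachable"
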
	I1009 18:30:58.612432   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:58.623245   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:58.623295   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:58.648116   41166 cri.go:89] found id: ""
	I1009 18:30:58.648129   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.648149   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:58.648156   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:58.648209   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:58.674600   41166 cri.go:89] found id: ""
	I1009 18:30:58.674619   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.674627   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:58.674634   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:58.674700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:58.700636   41166 cri.go:89] found id: ""
	I1009 18:30:58.700649   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.700655   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:58.700659   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:58.700701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:58.725891   41166 cri.go:89] found id: ""
	I1009 18:30:58.725907   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.725916   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:58.725922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:58.725984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:58.751493   41166 cri.go:89] found id: ""
	I1009 18:30:58.751509   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.751517   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:58.751523   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:58.751565   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:58.776578   41166 cri.go:89] found id: ""
	I1009 18:30:58.776594   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.776603   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:58.776609   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:58.776668   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:58.802746   41166 cri.go:89] found id: ""
	I1009 18:30:58.802759   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.802765   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:58.802772   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:58.802780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:58.871392   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:58.871409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:58.883200   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:58.883216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:58.939993   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:58.932935   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.933540   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935122   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935618   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.937106   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:58.940010   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:58.940026   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:59.001043   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:59.001062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
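
Each cycle then gathers the same four log sources. The commands below are the ones the log runs, annotated; the flag glosses follow the util-linux and systemd manuals:

    sudo journalctl -u kubelet -n 400   # last 400 lines of the kubelet unit
    sudo journalctl -u crio -n 400      # last 400 lines of the CRI-O unit
    # -P: no pager, -H: human-readable timestamps, -L=never: no color codes,
    # --level: keep only warnings and worse
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # use crictl if `which` resolves it (else try the bare name anyway),
    # falling back to docker when crictl is missing or fails
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
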
	I1009 18:31:01.533754   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:01.544314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:01.544360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:01.570557   41166 cri.go:89] found id: ""
	I1009 18:31:01.570573   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.570581   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:01.570587   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:01.570633   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:01.597498   41166 cri.go:89] found id: ""
	I1009 18:31:01.597512   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.597518   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:01.597522   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:01.597562   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:01.624834   41166 cri.go:89] found id: ""
	I1009 18:31:01.624850   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.624859   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:01.624865   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:01.624928   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:01.650834   41166 cri.go:89] found id: ""
	I1009 18:31:01.650849   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.650858   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:01.650864   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:01.650902   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:01.676498   41166 cri.go:89] found id: ""
	I1009 18:31:01.676513   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.676522   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:01.676530   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:01.676575   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:01.702274   41166 cri.go:89] found id: ""
	I1009 18:31:01.702288   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.702299   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:01.702304   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:01.702359   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:01.727077   41166 cri.go:89] found id: ""
	I1009 18:31:01.727089   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.727095   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:01.727102   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:01.727110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:01.794867   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:01.794884   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:01.807132   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:01.807156   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:01.863186   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:01.856581   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.857195   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.858743   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.859211   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.860783   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:01.863194   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:01.863203   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:01.926319   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:01.926337   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
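
The describe-nodes probe does not use the host's kubectl; it runs the kubectl binary minikube ships into the node, against the node-local kubeconfig. The in-node form below is exactly what the log runs; the host-side form is an assumed equivalent that only works if the current kubectl context points at this profile:

    # inside the node:
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # from the host, assuming this profile's context is active:
    kubectl describe nodes
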
	I1009 18:31:04.456429   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:04.467647   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:04.467697   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:04.494363   41166 cri.go:89] found id: ""
	I1009 18:31:04.494376   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.494382   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:04.494386   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:04.494425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:04.519597   41166 cri.go:89] found id: ""
	I1009 18:31:04.519613   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.519622   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:04.519627   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:04.519673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:04.544960   41166 cri.go:89] found id: ""
	I1009 18:31:04.544973   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.544979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:04.544983   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:04.545025   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:04.570312   41166 cri.go:89] found id: ""
	I1009 18:31:04.570326   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.570331   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:04.570336   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:04.570376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:04.598075   41166 cri.go:89] found id: ""
	I1009 18:31:04.598088   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.598094   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:04.598098   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:04.598163   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:04.624439   41166 cri.go:89] found id: ""
	I1009 18:31:04.624452   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.624458   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:04.624462   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:04.624501   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:04.650512   41166 cri.go:89] found id: ""
	I1009 18:31:04.650526   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.650535   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:04.650542   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:04.650550   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:04.721753   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:04.721770   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:04.733512   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:04.733526   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:04.789859   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:04.782731   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.783273   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.784877   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.785331   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.786824   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:04.789871   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:04.789881   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:04.853995   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:04.854014   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:07.383979   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:07.395090   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:07.395190   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:07.421890   41166 cri.go:89] found id: ""
	I1009 18:31:07.421903   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.421909   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:07.421914   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:07.421966   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:07.448060   41166 cri.go:89] found id: ""
	I1009 18:31:07.448073   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.448079   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:07.448083   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:07.448124   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:07.474470   41166 cri.go:89] found id: ""
	I1009 18:31:07.474482   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.474488   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:07.474493   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:07.474536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:07.501777   41166 cri.go:89] found id: ""
	I1009 18:31:07.501793   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.501802   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:07.501808   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:07.501851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:07.527522   41166 cri.go:89] found id: ""
	I1009 18:31:07.527534   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.527540   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:07.527545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:07.527597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:07.552279   41166 cri.go:89] found id: ""
	I1009 18:31:07.552294   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.552302   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:07.552307   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:07.552346   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:07.576431   41166 cri.go:89] found id: ""
	I1009 18:31:07.576446   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.576454   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:07.576462   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:07.576470   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:07.643680   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:07.643696   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:07.655497   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:07.655511   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:07.710565   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:07.703625   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.704548   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706134   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706591   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.708100   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:07.710581   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:07.710591   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:07.772201   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:07.772218   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.301414   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:10.312068   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:10.312119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:10.336646   41166 cri.go:89] found id: ""
	I1009 18:31:10.336661   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.336668   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:10.336672   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:10.336714   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:10.361765   41166 cri.go:89] found id: ""
	I1009 18:31:10.361779   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.361788   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:10.361793   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:10.361849   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:10.386638   41166 cri.go:89] found id: ""
	I1009 18:31:10.386654   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.386663   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:10.386669   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:10.386715   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:10.412340   41166 cri.go:89] found id: ""
	I1009 18:31:10.412353   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.412359   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:10.412363   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:10.412402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:10.437345   41166 cri.go:89] found id: ""
	I1009 18:31:10.437360   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.437368   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:10.437372   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:10.437412   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:10.461775   41166 cri.go:89] found id: ""
	I1009 18:31:10.461790   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.461797   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:10.461804   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:10.461851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:10.486502   41166 cri.go:89] found id: ""
	I1009 18:31:10.486515   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.486521   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:10.486528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:10.486540   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:10.541525   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:10.534617   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.535191   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.536754   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.537206   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.538626   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:10.541534   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:10.541543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:10.605554   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:10.605573   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.633218   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:10.633233   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:10.698623   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:10.698640   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.212017   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:13.222887   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:13.222934   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:13.249527   41166 cri.go:89] found id: ""
	I1009 18:31:13.249545   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.249553   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:13.249558   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:13.249613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:13.276030   41166 cri.go:89] found id: ""
	I1009 18:31:13.276047   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.276055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:13.276062   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:13.276123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:13.301696   41166 cri.go:89] found id: ""
	I1009 18:31:13.301712   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.301722   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:13.301728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:13.301779   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:13.327279   41166 cri.go:89] found id: ""
	I1009 18:31:13.327297   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.327305   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:13.327314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:13.327376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:13.352370   41166 cri.go:89] found id: ""
	I1009 18:31:13.352387   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.352396   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:13.352404   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:13.352455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:13.376705   41166 cri.go:89] found id: ""
	I1009 18:31:13.376718   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.376724   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:13.376728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:13.376769   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:13.401874   41166 cri.go:89] found id: ""
	I1009 18:31:13.401887   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.401893   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:13.401899   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:13.401908   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:13.468065   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:13.468083   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.479819   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:13.479833   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:13.536357   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:13.528543   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.529016   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.530652   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532160   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532602   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:13.536371   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:13.536385   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:13.595534   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:13.595552   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
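
Zooming out, the cycles repeat on a roughly three-second cadence: probe for the apiserver, gather logs, sleep, retry, until some deadline expires. A bash equivalent of that wait loop; the 120 s deadline and 3 s sleep are illustrative assumptions, not minikube's actual tuning:

    deadline=$(( $(date +%s) + 120 ))   # assumed deadline, for illustration
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        exit 1
      fi
      sleep 3                           # matches the cadence seen in the log
    done
    echo "kube-apiserver is running"
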
	I1009 18:31:16.124813   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:16.135558   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:16.135630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:16.161632   41166 cri.go:89] found id: ""
	I1009 18:31:16.161649   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.161657   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:16.161662   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:16.161706   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:16.187466   41166 cri.go:89] found id: ""
	I1009 18:31:16.187480   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.187486   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:16.187491   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:16.187532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:16.214699   41166 cri.go:89] found id: ""
	I1009 18:31:16.214712   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.214718   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:16.214722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:16.214772   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:16.241600   41166 cri.go:89] found id: ""
	I1009 18:31:16.241617   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.241622   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:16.241627   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:16.241670   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:16.266065   41166 cri.go:89] found id: ""
	I1009 18:31:16.266082   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.266091   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:16.266097   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:16.266158   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:16.291053   41166 cri.go:89] found id: ""
	I1009 18:31:16.291067   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.291073   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:16.291077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:16.291123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:16.316037   41166 cri.go:89] found id: ""
	I1009 18:31:16.316053   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.316058   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:16.316065   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:16.316075   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:16.374518   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:16.374537   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:16.403805   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:16.403890   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:16.472344   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:16.472362   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:16.483905   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:16.483921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:16.539056   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:16.532081   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.532735   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534334   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534743   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.536309   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:19.039513   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:19.050212   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:19.050255   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:19.074802   41166 cri.go:89] found id: ""
	I1009 18:31:19.074819   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.074828   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:19.074834   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:19.074879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:19.101554   41166 cri.go:89] found id: ""
	I1009 18:31:19.101568   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.101574   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:19.101579   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:19.101618   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:19.126592   41166 cri.go:89] found id: ""
	I1009 18:31:19.126604   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.126610   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:19.126614   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:19.126652   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:19.151096   41166 cri.go:89] found id: ""
	I1009 18:31:19.151108   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.151117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:19.151124   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:19.151179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:19.175712   41166 cri.go:89] found id: ""
	I1009 18:31:19.175730   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.175736   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:19.175740   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:19.175781   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:19.200064   41166 cri.go:89] found id: ""
	I1009 18:31:19.200080   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.200088   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:19.200094   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:19.200161   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:19.227391   41166 cri.go:89] found id: ""
	I1009 18:31:19.227406   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.227414   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:19.227424   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:19.227434   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:19.289413   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:19.289430   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:19.318081   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:19.318095   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:19.387739   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:19.387754   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:19.399028   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:19.399046   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:19.454538   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:19.447438   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.447971   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449548   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449995   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.451532   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
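
The cycle above is the probe minikube repeats while waiting for the apiserver to come back: first a process check, then a CRI query per control-plane component. The same two commands, exactly as logged, can be replayed by hand on the node:

	# Is a kube-apiserver process for this profile alive?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# Is any kube-apiserver container (running or exited) known to the CRI runtime?
	sudo crictl ps -a --quiet --name=kube-apiserver
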
	I1009 18:31:21.956227   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:21.966936   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:21.966995   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:21.991378   41166 cri.go:89] found id: ""
	I1009 18:31:21.991391   41166 logs.go:282] 0 containers: []
	W1009 18:31:21.991397   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:21.991402   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:21.991440   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:22.016783   41166 cri.go:89] found id: ""
	I1009 18:31:22.016796   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.016803   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:22.016808   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:22.016848   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:22.041987   41166 cri.go:89] found id: ""
	I1009 18:31:22.042003   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.042012   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:22.042018   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:22.042068   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:22.067709   41166 cri.go:89] found id: ""
	I1009 18:31:22.067722   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.067727   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:22.067735   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:22.067787   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:22.093654   41166 cri.go:89] found id: ""
	I1009 18:31:22.093666   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.093671   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:22.093675   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:22.093718   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:22.119263   41166 cri.go:89] found id: ""
	I1009 18:31:22.119276   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.119306   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:22.119310   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:22.119350   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:22.143920   41166 cri.go:89] found id: ""
	I1009 18:31:22.143933   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.143939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:22.143945   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:22.143954   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:22.172713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:22.172727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:22.241689   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:22.241717   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:22.253927   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:22.253942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:22.308454   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:22.301618   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.302105   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.303689   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.304160   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.305712   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:22.308469   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:22.308483   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
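
Each cycle then gathers the same four diagnostics. The commands below are verbatim from the log and can be replayed directly on the node:

	sudo journalctl -u kubelet -n 400   # kubelet unit logs
	sudo journalctl -u crio -n 400      # CRI-O unit logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
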
	I1009 18:31:24.874240   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:24.885199   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:24.885251   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:24.912332   41166 cri.go:89] found id: ""
	I1009 18:31:24.912355   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.912363   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:24.912369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:24.912510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:24.938534   41166 cri.go:89] found id: ""
	I1009 18:31:24.938551   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.938557   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:24.938564   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:24.938611   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:24.965113   41166 cri.go:89] found id: ""
	I1009 18:31:24.965125   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.965131   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:24.965151   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:24.965204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:24.991845   41166 cri.go:89] found id: ""
	I1009 18:31:24.991858   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.991864   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:24.991868   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:24.991910   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:25.018693   41166 cri.go:89] found id: ""
	I1009 18:31:25.018706   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.018711   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:25.018717   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:25.018756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:25.044931   41166 cri.go:89] found id: ""
	I1009 18:31:25.044948   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.044957   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:25.044963   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:25.045014   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:25.071449   41166 cri.go:89] found id: ""
	I1009 18:31:25.071465   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.071474   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:25.071483   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:25.071495   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:25.138301   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:25.138320   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:25.150561   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:25.150575   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:25.208095   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:25.201000   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.201519   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203190   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203673   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.205213   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:25.208105   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:25.208114   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:25.272810   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:25.272829   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:27.804229   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:27.815074   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:27.815120   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:27.840171   41166 cri.go:89] found id: ""
	I1009 18:31:27.840188   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.840196   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:27.840200   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:27.840274   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:27.866963   41166 cri.go:89] found id: ""
	I1009 18:31:27.866981   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.866990   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:27.866996   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:27.867076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:27.893152   41166 cri.go:89] found id: ""
	I1009 18:31:27.893169   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.893177   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:27.893183   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:27.893235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:27.920337   41166 cri.go:89] found id: ""
	I1009 18:31:27.920350   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.920356   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:27.920361   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:27.920403   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:27.945940   41166 cri.go:89] found id: ""
	I1009 18:31:27.945956   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.945964   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:27.945971   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:27.946036   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:27.971578   41166 cri.go:89] found id: ""
	I1009 18:31:27.971594   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.971600   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:27.971604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:27.971651   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:27.998876   41166 cri.go:89] found id: ""
	I1009 18:31:27.998890   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.998898   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:27.998907   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:27.998919   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:28.060031   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:28.060050   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:28.090280   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:28.090294   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:28.155986   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:28.156004   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:28.167898   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:28.167912   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:28.224480   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:28.217373   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.217904   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219580   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219973   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.221548   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
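
The per-component CRI checks are a single invocation repeated over a fixed component list. A compact equivalent is sketched below; the loop is an editorial condensation, as minikube issues the calls individually:

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	    sudo crictl ps -a --quiet --name="$c"
	done
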
	I1009 18:31:30.726158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:30.736658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:30.736713   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:30.762096   41166 cri.go:89] found id: ""
	I1009 18:31:30.762111   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.762119   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:30.762125   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:30.762193   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:30.787132   41166 cri.go:89] found id: ""
	I1009 18:31:30.787161   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.787169   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:30.787175   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:30.787234   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:30.813496   41166 cri.go:89] found id: ""
	I1009 18:31:30.813510   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.813515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:30.813519   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:30.813558   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:30.838073   41166 cri.go:89] found id: ""
	I1009 18:31:30.838089   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.838098   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:30.838104   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:30.838167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:30.864286   41166 cri.go:89] found id: ""
	I1009 18:31:30.864301   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.864307   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:30.864312   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:30.864353   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:30.890806   41166 cri.go:89] found id: ""
	I1009 18:31:30.890819   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.890825   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:30.890830   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:30.890885   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:30.917461   41166 cri.go:89] found id: ""
	I1009 18:31:30.917474   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.917480   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:30.917487   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:30.917496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:30.947122   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:30.947157   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:31.013114   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:31.013130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:31.025904   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:31.025924   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:31.081194   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:31.074116   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.074697   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076284   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076747   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.078298   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:31.081206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:31.081217   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:33.641553   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:33.652051   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:33.652105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:33.676453   41166 cri.go:89] found id: ""
	I1009 18:31:33.676467   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.676473   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:33.676477   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:33.676517   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:33.701838   41166 cri.go:89] found id: ""
	I1009 18:31:33.701854   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.701862   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:33.701868   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:33.701916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:33.727771   41166 cri.go:89] found id: ""
	I1009 18:31:33.727787   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.727794   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:33.727799   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:33.727839   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:33.753654   41166 cri.go:89] found id: ""
	I1009 18:31:33.753670   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.753681   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:33.753686   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:33.753731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:33.780405   41166 cri.go:89] found id: ""
	I1009 18:31:33.780421   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.780430   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:33.780436   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:33.780477   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:33.807435   41166 cri.go:89] found id: ""
	I1009 18:31:33.807448   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.807454   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:33.807458   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:33.807502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:33.833608   41166 cri.go:89] found id: ""
	I1009 18:31:33.833625   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.833633   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:33.833642   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:33.833655   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:33.900086   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:33.900106   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:33.912409   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:33.912429   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:33.968532   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:33.961720   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.962278   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.963911   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.964427   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.965875   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:33.968541   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:33.968551   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:34.031879   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:34.031899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:36.563728   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:36.574356   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:36.574399   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:36.600194   41166 cri.go:89] found id: ""
	I1009 18:31:36.600209   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.600217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:36.600223   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:36.600284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:36.626075   41166 cri.go:89] found id: ""
	I1009 18:31:36.626096   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.626106   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:36.626111   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:36.626182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:36.652078   41166 cri.go:89] found id: ""
	I1009 18:31:36.652098   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.652104   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:36.652109   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:36.652170   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:36.677462   41166 cri.go:89] found id: ""
	I1009 18:31:36.677474   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.677480   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:36.677484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:36.677522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:36.703778   41166 cri.go:89] found id: ""
	I1009 18:31:36.703793   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.703801   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:36.703807   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:36.703856   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:36.729868   41166 cri.go:89] found id: ""
	I1009 18:31:36.729884   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.729893   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:36.729899   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:36.729942   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:36.756775   41166 cri.go:89] found id: ""
	I1009 18:31:36.756787   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.756793   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:36.756801   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:36.756810   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:36.826838   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:36.826854   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:36.838705   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:36.838718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:36.894816   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:36.887889   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.888440   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890010   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890538   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.891994   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:36.894826   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:36.894838   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:36.959865   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:36.959882   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
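
The container-status gather uses a two-stage fallback, exactly as logged: the backticks substitute crictl's full path where it is installed (or the bare name as a last resort), and docker is tried only when the crictl invocation fails:

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
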
	I1009 18:31:39.490368   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:39.501284   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:39.501335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:39.527003   41166 cri.go:89] found id: ""
	I1009 18:31:39.527016   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.527022   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:39.527026   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:39.527071   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:39.553355   41166 cri.go:89] found id: ""
	I1009 18:31:39.553370   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.553379   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:39.553384   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:39.553425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:39.579105   41166 cri.go:89] found id: ""
	I1009 18:31:39.579121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.579128   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:39.579133   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:39.579203   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:39.604899   41166 cri.go:89] found id: ""
	I1009 18:31:39.604913   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.604919   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:39.604928   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:39.604985   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:39.630635   41166 cri.go:89] found id: ""
	I1009 18:31:39.630647   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.630653   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:39.630657   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:39.630701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:39.656106   41166 cri.go:89] found id: ""
	I1009 18:31:39.656121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.656129   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:39.656148   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:39.656207   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:39.681655   41166 cri.go:89] found id: ""
	I1009 18:31:39.681667   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.681673   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:39.681680   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:39.681688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:39.744126   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:39.744152   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:39.772799   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:39.772812   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:39.844571   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:39.844590   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:39.856246   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:39.856263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:39.911854   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:39.905117   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.905586   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907188   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907677   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.909231   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:42.413528   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:42.424343   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:42.424407   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:42.450128   41166 cri.go:89] found id: ""
	I1009 18:31:42.450165   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.450173   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:42.450180   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:42.450239   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:42.475946   41166 cri.go:89] found id: ""
	I1009 18:31:42.475961   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.475970   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:42.475976   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:42.476031   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:42.502865   41166 cri.go:89] found id: ""
	I1009 18:31:42.502881   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.502890   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:42.502896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:42.502946   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:42.530798   41166 cri.go:89] found id: ""
	I1009 18:31:42.530814   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.530823   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:42.530829   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:42.530879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:42.556524   41166 cri.go:89] found id: ""
	I1009 18:31:42.556539   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.556548   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:42.556554   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:42.556605   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:42.582936   41166 cri.go:89] found id: ""
	I1009 18:31:42.582953   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.582961   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:42.582967   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:42.583055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:42.609400   41166 cri.go:89] found id: ""
	I1009 18:31:42.609415   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.609424   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:42.609433   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:42.609444   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:42.671451   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:42.671468   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:42.700813   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:42.700832   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:42.769841   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:42.769859   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:42.782244   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:42.782261   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:42.840011   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:42.832755   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.833376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.834917   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.835376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.836976   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
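	The retry passes below repeat the same diagnostic sweep seen above: minikube polls for a live apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), lists CRI containers for each control-plane component with crictl, and, finding none, falls back to gathering kubelet, dmesg, CRI-O, and container-status logs before trying again. The Go sketch below is a minimal reconstruction of that poll-then-diagnose shape for this report, not minikube's actual implementation: the real logic lives in logs.go/cri.go and runs every command through an SSH runner inside the node, whereas this sketch assumes local sudo and crictl, and the helper names apiserverRunning and listContainers are invented here. The roughly three-second pause matches the spacing of the timestamps in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
// probe from the log: pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// listContainers mirrors "sudo crictl ps -a --quiet --name=<name>" and
// returns the matching container IDs, which may be empty.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for !apiserverRunning() {
		for _, c := range components {
			if len(listContainers(c)) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
		// The real loop gathers kubelet/dmesg/CRI-O/container-status logs
		// here; the journalctl and crictl commands appear verbatim above.
		time.Sleep(3 * time.Second)
	}
}

	Each iteration recorded below is one pass of this loop, and every pass finds zero containers for every component, so the loop never exits before the test times out.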
	I1009 18:31:45.340705   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:45.350991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:45.351034   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:45.375913   41166 cri.go:89] found id: ""
	I1009 18:31:45.375926   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.375932   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:45.375936   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:45.375974   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:45.402366   41166 cri.go:89] found id: ""
	I1009 18:31:45.402380   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.402386   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:45.402391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:45.402432   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:45.428247   41166 cri.go:89] found id: ""
	I1009 18:31:45.428263   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.428272   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:45.428278   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:45.428332   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:45.454072   41166 cri.go:89] found id: ""
	I1009 18:31:45.454087   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.454094   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:45.454103   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:45.454173   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:45.479985   41166 cri.go:89] found id: ""
	I1009 18:31:45.480000   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.480006   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:45.480012   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:45.480064   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:45.505956   41166 cri.go:89] found id: ""
	I1009 18:31:45.505972   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.505980   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:45.505986   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:45.506041   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:45.530757   41166 cri.go:89] found id: ""
	I1009 18:31:45.530770   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.530775   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:45.530782   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:45.530791   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:45.597676   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:45.597693   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:45.609290   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:45.609305   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:45.666583   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:45.659856   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.660431   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.661987   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.662451   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.663976   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:45.666593   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:45.666604   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:45.730000   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:45.730018   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
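	Every "describe nodes" attempt fails the same way: kubectl dials the apiserver address recorded in /var/lib/minikube/kubeconfig (localhost:8441 in this run), and because no kube-apiserver container exists, nothing is listening on that port and the TCP dial is refused outright rather than timing out. A tiny probe in the same vein; the port comes from the log, while the one-second timeout and everything else in this sketch are illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Nothing listens on the apiserver port, so the dial fails
	// immediately with "connect: connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}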
	I1009 18:31:48.259768   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:48.270482   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:48.270528   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:48.297438   41166 cri.go:89] found id: ""
	I1009 18:31:48.297454   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.297462   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:48.297467   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:48.297510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:48.323680   41166 cri.go:89] found id: ""
	I1009 18:31:48.323695   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.323704   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:48.323710   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:48.323756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:48.348422   41166 cri.go:89] found id: ""
	I1009 18:31:48.348437   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.348445   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:48.348450   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:48.348507   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:48.373232   41166 cri.go:89] found id: ""
	I1009 18:31:48.373247   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.373253   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:48.373263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:48.373306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:48.398755   41166 cri.go:89] found id: ""
	I1009 18:31:48.398770   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.398776   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:48.398781   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:48.398822   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:48.423977   41166 cri.go:89] found id: ""
	I1009 18:31:48.423993   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.423999   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:48.424004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:48.424056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:48.450473   41166 cri.go:89] found id: ""
	I1009 18:31:48.450486   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.450492   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:48.450499   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:48.450510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:48.461974   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:48.461997   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:48.519875   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:48.513250   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.513778   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515240   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515817   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.517350   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:48.519884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:48.519893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:48.579801   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:48.579819   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:48.609008   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:48.609031   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:51.179735   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:51.190623   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:51.190689   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:51.215839   41166 cri.go:89] found id: ""
	I1009 18:31:51.215854   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.215860   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:51.215866   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:51.215919   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:51.241754   41166 cri.go:89] found id: ""
	I1009 18:31:51.241771   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.241781   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:51.241786   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:51.241834   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:51.269204   41166 cri.go:89] found id: ""
	I1009 18:31:51.269221   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.269227   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:51.269233   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:51.269288   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:51.296498   41166 cri.go:89] found id: ""
	I1009 18:31:51.296514   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.296522   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:51.296527   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:51.296573   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:51.323034   41166 cri.go:89] found id: ""
	I1009 18:31:51.323049   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.323057   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:51.323063   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:51.323112   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:51.348104   41166 cri.go:89] found id: ""
	I1009 18:31:51.348119   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.348125   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:51.348131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:51.348199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:51.374228   41166 cri.go:89] found id: ""
	I1009 18:31:51.374242   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.374248   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:51.374255   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:51.374265   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:51.403810   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:51.403825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:51.474611   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:51.474630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:51.486750   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:51.486766   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:51.542637   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:51.535796   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.536370   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.537923   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.538394   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.539906   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:51.542656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:51.542666   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:54.103184   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:54.114409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:54.114455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:54.140634   41166 cri.go:89] found id: ""
	I1009 18:31:54.140646   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.140652   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:54.140656   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:54.140703   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:54.166896   41166 cri.go:89] found id: ""
	I1009 18:31:54.166911   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.166918   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:54.166922   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:54.166962   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:54.193155   41166 cri.go:89] found id: ""
	I1009 18:31:54.193170   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.193176   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:54.193181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:54.193222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:54.217754   41166 cri.go:89] found id: ""
	I1009 18:31:54.217767   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.217772   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:54.217777   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:54.217819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:54.243823   41166 cri.go:89] found id: ""
	I1009 18:31:54.243837   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.243843   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:54.243848   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:54.243887   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:54.271827   41166 cri.go:89] found id: ""
	I1009 18:31:54.271841   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.271847   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:54.271852   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:54.271895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:54.297907   41166 cri.go:89] found id: ""
	I1009 18:31:54.297920   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.297925   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:54.297932   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:54.297942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:54.365493   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:54.365510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:54.377258   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:54.377275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:54.432221   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:54.425355   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.425907   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427547   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427972   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.429614   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:54.432234   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:54.432244   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:54.492172   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:54.492189   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.022444   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:57.033223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:57.033285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:57.059246   41166 cri.go:89] found id: ""
	I1009 18:31:57.059267   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.059273   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:57.059277   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:57.059348   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:57.084187   41166 cri.go:89] found id: ""
	I1009 18:31:57.084199   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.084205   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:57.084209   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:57.084250   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:57.109765   41166 cri.go:89] found id: ""
	I1009 18:31:57.109778   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.109784   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:57.109788   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:57.109828   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:57.135796   41166 cri.go:89] found id: ""
	I1009 18:31:57.135809   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.135817   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:57.135824   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:57.136027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:57.162702   41166 cri.go:89] found id: ""
	I1009 18:31:57.162715   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.162720   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:57.162724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:57.162773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:57.189575   41166 cri.go:89] found id: ""
	I1009 18:31:57.189588   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.189594   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:57.189598   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:57.189639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:57.214916   41166 cri.go:89] found id: ""
	I1009 18:31:57.214931   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.214939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:57.214946   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:57.214956   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:57.226333   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:57.226347   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:57.282176   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:57.275375   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.275847   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277403   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277780   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.279430   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:57.282186   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:57.282196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:57.341981   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:57.341999   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.372028   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:57.372043   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:59.940902   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:59.951810   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:59.951853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:59.977888   41166 cri.go:89] found id: ""
	I1009 18:31:59.977902   41166 logs.go:282] 0 containers: []
	W1009 18:31:59.977908   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:59.977912   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:59.977977   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:00.004236   41166 cri.go:89] found id: ""
	I1009 18:32:00.004252   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.004265   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:00.004293   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:00.004347   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:00.030808   41166 cri.go:89] found id: ""
	I1009 18:32:00.030826   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.030836   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:00.030842   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:00.030895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:00.056760   41166 cri.go:89] found id: ""
	I1009 18:32:00.056772   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.056778   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:00.056782   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:00.056826   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:00.083048   41166 cri.go:89] found id: ""
	I1009 18:32:00.083062   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.083068   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:00.083072   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:00.083116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:00.109679   41166 cri.go:89] found id: ""
	I1009 18:32:00.109693   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.109699   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:00.109704   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:00.109753   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:00.135808   41166 cri.go:89] found id: ""
	I1009 18:32:00.135820   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.135826   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:00.135833   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:00.135841   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:00.192719   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:00.185431   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.185945   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.187601   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.188147   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.189704   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:00.192732   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:00.192744   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:00.253264   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:00.253287   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:00.283450   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:00.283463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:00.350291   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:00.350309   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:02.863750   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:02.874396   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:02.874434   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:02.900500   41166 cri.go:89] found id: ""
	I1009 18:32:02.900513   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.900519   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:02.900523   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:02.900563   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:02.926067   41166 cri.go:89] found id: ""
	I1009 18:32:02.926083   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.926092   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:02.926099   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:02.926157   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:02.951112   41166 cri.go:89] found id: ""
	I1009 18:32:02.951127   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.951147   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:02.951154   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:02.951202   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:02.976038   41166 cri.go:89] found id: ""
	I1009 18:32:02.976052   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.976057   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:02.976062   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:02.976114   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:03.001712   41166 cri.go:89] found id: ""
	I1009 18:32:03.001724   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.001730   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:03.001734   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:03.001773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:03.028181   41166 cri.go:89] found id: ""
	I1009 18:32:03.028195   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.028201   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:03.028205   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:03.028247   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:03.054529   41166 cri.go:89] found id: ""
	I1009 18:32:03.054541   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.054547   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:03.054554   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:03.054565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:03.122196   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:03.122214   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:03.133617   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:03.133633   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:03.189282   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:03.182610   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.183115   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.184674   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.185052   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.186556   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:03.189291   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:03.189301   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:03.252856   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:03.252874   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:05.784812   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:05.795352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:05.795402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:05.820276   41166 cri.go:89] found id: ""
	I1009 18:32:05.820289   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.820295   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:05.820300   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:05.820341   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:05.846395   41166 cri.go:89] found id: ""
	I1009 18:32:05.846408   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.846414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:05.846418   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:05.846469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:05.872185   41166 cri.go:89] found id: ""
	I1009 18:32:05.872199   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.872205   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:05.872209   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:05.872254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:05.898231   41166 cri.go:89] found id: ""
	I1009 18:32:05.898251   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.898257   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:05.898263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:05.898303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:05.923683   41166 cri.go:89] found id: ""
	I1009 18:32:05.923699   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.923707   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:05.923712   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:05.923755   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:05.949168   41166 cri.go:89] found id: ""
	I1009 18:32:05.949183   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.949188   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:05.949193   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:05.949236   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:05.975320   41166 cri.go:89] found id: ""
	I1009 18:32:05.975332   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.975338   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:05.975344   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:05.975354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:06.041809   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:06.041827   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:06.054016   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:06.054040   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:06.110078   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:06.103223   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.103767   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105448   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105875   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.107466   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:06.110088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:06.110097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:06.172545   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:06.172564   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
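	The cycle above is minikube listing each expected control-plane container with crictl and finding none. A minimal sketch of the same check, run by hand inside the node (an assumption: crictl is on the PATH there, e.g. after "minikube ssh"; this is not minikube source):

	    # Hand-run version of the per-component check in the log above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      # empty output means no container (running or exited) matched the name filter
	      [ -z "$ids" ] && echo "no container found matching \"$c\"" || echo "$c: $ids"
	    done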
	I1009 18:32:08.701488   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:08.712540   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:08.712594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:08.738583   41166 cri.go:89] found id: ""
	I1009 18:32:08.738601   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.738608   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:08.738613   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:08.738654   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:08.764379   41166 cri.go:89] found id: ""
	I1009 18:32:08.764396   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.764404   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:08.764412   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:08.764466   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:08.790325   41166 cri.go:89] found id: ""
	I1009 18:32:08.790351   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.790360   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:08.790367   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:08.790417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:08.816765   41166 cri.go:89] found id: ""
	I1009 18:32:08.816780   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.816788   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:08.816792   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:08.816844   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:08.842038   41166 cri.go:89] found id: ""
	I1009 18:32:08.842050   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.842055   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:08.842060   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:08.842119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:08.868221   41166 cri.go:89] found id: ""
	I1009 18:32:08.868236   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.868243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:08.868248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:08.868291   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:08.894780   41166 cri.go:89] found id: ""
	I1009 18:32:08.894797   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.894804   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:08.894810   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:08.894820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:08.952094   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:08.944952   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.945523   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947209   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947687   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.949320   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:08.952107   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:08.952121   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:09.012751   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:09.012769   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:09.042946   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:09.042958   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:09.111059   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:09.111076   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.624407   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:11.635246   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:11.635303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:11.661128   41166 cri.go:89] found id: ""
	I1009 18:32:11.661159   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.661167   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:11.661173   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:11.661225   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:11.685846   41166 cri.go:89] found id: ""
	I1009 18:32:11.685860   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.685866   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:11.685870   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:11.685909   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:11.711700   41166 cri.go:89] found id: ""
	I1009 18:32:11.711714   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.711719   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:11.711723   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:11.711770   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:11.737208   41166 cri.go:89] found id: ""
	I1009 18:32:11.737220   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.737225   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:11.737230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:11.737278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:11.762359   41166 cri.go:89] found id: ""
	I1009 18:32:11.762370   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.762376   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:11.762380   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:11.762430   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:11.787996   41166 cri.go:89] found id: ""
	I1009 18:32:11.788011   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.788019   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:11.788024   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:11.788084   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:11.812657   41166 cri.go:89] found id: ""
	I1009 18:32:11.812671   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.812677   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:11.812685   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:11.812694   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:11.879681   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:11.879697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.891109   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:11.891124   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:11.947646   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:11.940720   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.941253   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.942799   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.943257   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.944825   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:11.947659   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:11.947672   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:12.013733   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:12.013750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.545559   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:14.556586   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:14.556634   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:14.584233   41166 cri.go:89] found id: ""
	I1009 18:32:14.584250   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.584258   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:14.584263   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:14.584312   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:14.610477   41166 cri.go:89] found id: ""
	I1009 18:32:14.610493   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.610500   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:14.610505   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:14.610560   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:14.635807   41166 cri.go:89] found id: ""
	I1009 18:32:14.635824   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.635832   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:14.635837   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:14.635880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:14.661016   41166 cri.go:89] found id: ""
	I1009 18:32:14.661034   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.661043   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:14.661049   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:14.661098   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:14.689198   41166 cri.go:89] found id: ""
	I1009 18:32:14.689212   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.689217   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:14.689223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:14.689278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:14.714892   41166 cri.go:89] found id: ""
	I1009 18:32:14.714908   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.714917   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:14.714923   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:14.714971   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:14.740412   41166 cri.go:89] found id: ""
	I1009 18:32:14.740425   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.740433   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:14.740440   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:14.740449   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:14.803421   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:14.803439   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.831580   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:14.831594   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:14.901628   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:14.901653   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:14.914304   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:14.914326   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:14.971146   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:14.964264   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.964764   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966352   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966731   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.968402   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:17.472817   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:17.483574   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:17.483619   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:17.510868   41166 cri.go:89] found id: ""
	I1009 18:32:17.510882   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.510891   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:17.510896   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:17.510956   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:17.537306   41166 cri.go:89] found id: ""
	I1009 18:32:17.537319   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.537325   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:17.537329   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:17.537372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:17.564957   41166 cri.go:89] found id: ""
	I1009 18:32:17.564972   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.564978   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:17.564984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:17.565039   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:17.591401   41166 cri.go:89] found id: ""
	I1009 18:32:17.591418   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.591425   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:17.591430   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:17.591476   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:17.617237   41166 cri.go:89] found id: ""
	I1009 18:32:17.617250   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.617256   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:17.617260   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:17.617302   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:17.642328   41166 cri.go:89] found id: ""
	I1009 18:32:17.642342   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.642348   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:17.642352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:17.642400   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:17.668302   41166 cri.go:89] found id: ""
	I1009 18:32:17.668315   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.668321   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:17.668327   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:17.668336   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:17.679448   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:17.679463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:17.736174   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:17.728959   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.729672   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731395   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731844   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.733446   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:17.736227   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:17.736236   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:17.795423   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:17.795442   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:17.824553   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:17.824567   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:20.394282   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:20.405003   41166 kubeadm.go:601] duration metric: took 4m2.649024916s to restartPrimaryControlPlane
	W1009 18:32:20.405078   41166 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
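	At this point the retry window has expired: every iteration above began with sudo pgrep -xnf kube-apiserver.*minikube.* and never found an apiserver process, so minikube gives up restarting and falls back to a full cluster reset. A quick manual probe of the same two signals, assuming pgrep and curl are available inside the node (both should fail in the state shown here):

	    # Hypothetical manual probe; not part of minikube.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    curl -ksS https://localhost:8441/livez || echo "apiserver on :8441 not answering"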
	I1009 18:32:20.405162   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:32:20.850567   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:32:20.863734   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:32:20.872360   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:32:20.872401   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:32:20.880727   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:32:20.880752   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:32:20.880802   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:32:20.888758   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:32:20.888797   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:32:20.896370   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:32:20.904128   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:32:20.904188   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:32:20.911725   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.919740   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:32:20.919783   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.927592   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:32:20.935300   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:32:20.935348   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
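	The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. Here none of the files exist after the reset, so every grep exits with status 2 and each rm -f is a no-op. A compact sketch of the same logic (not minikube source; file names taken from the log):

	    endpoint="https://control-plane.minikube.internal:8441"
	    for f in admin kubelet controller-manager scheduler; do
	      conf="/etc/kubernetes/$f.conf"
	      # keep the file only if it already targets the expected endpoint
	      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
	    done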
	I1009 18:32:20.942573   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:32:20.998838   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:32:21.055610   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:36:23.829821   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:36:23.829939   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:36:23.832833   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:36:23.832899   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:36:23.833001   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:36:23.833078   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:36:23.833131   41166 kubeadm.go:318] OS: Linux
	I1009 18:36:23.833211   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:36:23.833255   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:36:23.833293   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:36:23.833332   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:36:23.833371   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:36:23.833408   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:36:23.833452   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:36:23.833487   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:36:23.833563   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:36:23.833644   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:36:23.833715   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:36:23.833763   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:36:23.836738   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:36:23.836809   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:36:23.836876   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:36:23.836946   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:36:23.836995   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:36:23.837054   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:36:23.837106   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:36:23.837180   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:36:23.837230   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:36:23.837295   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:36:23.837361   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:36:23.837391   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:36:23.837444   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:36:23.837485   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:36:23.837544   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:36:23.837590   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:36:23.837644   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:36:23.837687   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:36:23.837754   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:36:23.837807   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:36:23.840574   41166 out.go:252]   - Booting up control plane ...
	I1009 18:36:23.840651   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:36:23.840709   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:36:23.840759   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:36:23.840847   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:36:23.840933   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:36:23.841023   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:36:23.841122   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:36:23.841176   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:36:23.841286   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:36:23.841382   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:36:23.841430   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500920961s
	I1009 18:36:23.841508   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:36:23.841575   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:36:23.841650   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:36:23.841721   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:36:23.841779   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	I1009 18:36:23.841844   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	I1009 18:36:23.841921   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	I1009 18:36:23.841927   41166 kubeadm.go:318] 
	I1009 18:36:23.842001   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:36:23.842071   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:36:23.842160   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:36:23.842237   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:36:23.842297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:36:23.842366   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:36:23.842394   41166 kubeadm.go:318] 
	W1009 18:36:23.842478   41166 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500920961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
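	The failure text above already names the next diagnostic step. Spelled out for this CRI-O cluster, with CONTAINERID left as a placeholder for an ID taken from the first command's output:

	    # kubeadm's own suggestion: find the failing control-plane container, then read its logs.
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID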
	
	I1009 18:36:23.842555   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:36:24.285465   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:36:24.298222   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:36:24.298276   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:36:24.306625   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:36:24.306635   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:36:24.306675   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:36:24.314710   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:36:24.314750   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:36:24.322418   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:36:24.330123   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:36:24.330187   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:36:24.337953   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.346125   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:36:24.346179   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.354153   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:36:24.362094   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:36:24.362133   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:36:24.369784   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:36:24.426834   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:36:24.485641   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:40:27.797583   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:40:27.797662   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:40:27.800620   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:40:27.800659   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:40:27.800736   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:40:27.800783   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:40:27.800811   41166 kubeadm.go:318] OS: Linux
	I1009 18:40:27.800847   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:40:27.800885   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:40:27.800924   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:40:27.800985   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:40:27.801052   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:40:27.801090   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:40:27.801156   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:40:27.801201   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:40:27.801265   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:40:27.801343   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:40:27.801412   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:40:27.801484   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:40:27.805055   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:40:27.805120   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:40:27.805218   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:40:27.805293   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:40:27.805339   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:40:27.805412   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:40:27.805457   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:40:27.805510   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:40:27.805564   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:40:27.805620   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:40:27.805693   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:40:27.805748   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:40:27.805808   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:40:27.805852   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:40:27.805907   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:40:27.805950   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:40:27.805998   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:40:27.806045   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:40:27.806113   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:40:27.806212   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:40:27.807603   41166 out.go:252]   - Booting up control plane ...
	I1009 18:40:27.807673   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:40:27.807748   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:40:27.807805   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:40:27.807888   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:40:27.807967   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:40:27.808054   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:40:27.808118   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:40:27.808182   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:40:27.808282   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:40:27.808373   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:40:27.808424   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000969803s
	I1009 18:40:27.808512   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:40:27.808585   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:40:27.808667   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:40:27.808740   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:40:27.808798   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	I1009 18:40:27.808855   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	I1009 18:40:27.808919   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	I1009 18:40:27.808921   41166 kubeadm.go:318] 
	I1009 18:40:27.808989   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:40:27.809052   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:40:27.809124   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:40:27.809239   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:40:27.809297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:40:27.809386   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:40:27.809399   41166 kubeadm.go:318] 
	I1009 18:40:27.809438   41166 kubeadm.go:402] duration metric: took 12m10.090749097s to StartCluster
	I1009 18:40:27.809468   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:40:27.809513   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:40:27.837743   41166 cri.go:89] found id: ""
	I1009 18:40:27.837757   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.837763   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:40:27.837768   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:40:27.837814   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:40:27.863718   41166 cri.go:89] found id: ""
	I1009 18:40:27.863732   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.863738   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:40:27.863748   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:40:27.863792   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:40:27.889900   41166 cri.go:89] found id: ""
	I1009 18:40:27.889914   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.889920   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:40:27.889924   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:40:27.889980   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:40:27.916941   41166 cri.go:89] found id: ""
	I1009 18:40:27.916954   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.916960   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:40:27.916965   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:40:27.917024   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:40:27.943791   41166 cri.go:89] found id: ""
	I1009 18:40:27.943804   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.943809   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:40:27.943814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:40:27.943860   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:40:27.970612   41166 cri.go:89] found id: ""
	I1009 18:40:27.970625   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.970631   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:40:27.970635   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:40:27.970683   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:40:27.997688   41166 cri.go:89] found id: ""
	I1009 18:40:27.997700   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.997706   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:40:27.997713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:40:27.997721   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:40:28.064711   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:40:28.064730   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:40:28.076960   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:40:28.076978   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:40:28.135195   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:40:28.135206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:40:28.135216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:40:28.194198   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:40:28.194216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 18:40:28.224308   41166 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:40:28.224355   41166 out.go:285] * 
	W1009 18:40:28.224482   41166 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:40:28.224505   41166 out.go:285] * 
	W1009 18:40:28.226335   41166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:40:28.230950   41166 out.go:203] 
	W1009 18:40:28.232526   41166 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:40:28.232549   41166 out.go:285] * 
	I1009 18:40:28.235189   41166 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.543131713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.543583723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.544482915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.544937894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.561232429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562513475Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562718874Z" level=info msg="createCtr: deleting container ID 5089e63580fa138163a5434d6774e70806fd3b2b61a6691fd756e551d2db1984 from idIndex" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562744364Z" level=info msg="createCtr: removing container 5089e63580fa138163a5434d6774e70806fd3b2b61a6691fd756e551d2db1984" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.562773215Z" level=info msg="createCtr: deleting container 5089e63580fa138163a5434d6774e70806fd3b2b61a6691fd756e551d2db1984 from storage" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.563961674Z" level=info msg="createCtr: deleting container ID d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323 from idIndex" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.564064963Z" level=info msg="createCtr: removing container d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.564114864Z" level=info msg="createCtr: deleting container d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323 from storage" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.56610003Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.566508491Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753440_kube-system_0d946ec5c615de29dae011722e300735_0" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.536705355Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=13df285b-7387-4f01-937e-611c409808fa name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.537772337Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=b2ef6457-a8de-44bf-9645-e025765a3571 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.538868775Z" level=info msg="Creating container: kube-system/etcd-functional-753440/etcd" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.539098973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.54282272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.54340808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.558070772Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.559965846Z" level=info msg="createCtr: deleting container ID a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291 from idIndex" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.56001007Z" level=info msg="createCtr: removing container a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.560045273Z" level=info msg="createCtr: deleting container a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291 from storage" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.562455923Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:31.304309   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:31.304804   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:31.306473   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:31.306979   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:31.308241   15825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:31 up  1:22,  0 user,  load average: 0.12, 0.06, 0.07
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > podSandboxID="7a4353736f4a4433982204579f641a25b7ce51b570588adf77ed233c5025e9dc"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566505   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566536   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566767   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > podSandboxID="6fa88d0d4dd2687a2039db7efc159391e5e7ed9ab6f5700abe409768183910fe"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566838   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(0d946ec5c615de29dae011722e300735): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.567563   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:40:28 functional-753440 kubelet[14909]: E1009 18:40:28.847450   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.536187   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564042   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:29 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:29 functional-753440 kubelet[14909]:  > podSandboxID="7e16b1bb2bf2df093cc66fa197bd5344740cdfe9b099dcd26ba3fc1c3435b769"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564174   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:29 functional-753440 kubelet[14909]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:29 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564212   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.159164   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: I1009 18:40:31.315674   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.316034   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.344233   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (306.161435ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.90s)
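The kubelet and CRI-O logs above point at a single root cause for this failure: every CreateContainer call fails with "cannot open sd-bus: No such file or directory", so no control-plane container is ever created (the container status table above is empty). The triage that the kubeadm output recommends can be run end to end from the host; a minimal sketch, assuming the profile name and CRI-O socket path shown in the logs:

	# List all Kubernetes containers, running or exited, excluding pause sandboxes:
	minikube ssh -p functional-753440 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a failing container (CONTAINERID taken from the listing above):
	minikube ssh -p functional-753440 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

Here the listing comes back empty, which is consistent with containers failing at creation time (the sd-bus error) rather than crashing after start.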

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-753440 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-753440 apply -f testdata/invalidsvc.yaml: exit status 1 (52.846643ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-753440 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.05s)
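As the stderr above notes, kubectl's client-side validation has to download the server's OpenAPI document first, and that request is what fails here. Validation can be skipped with the flag the error message itself names; a sketch of that bypass (the apply would still fail on this node, since the API server on 192.168.49.2:8441 is refusing connections):

	kubectl --context functional-753440 apply -f testdata/invalidsvc.yaml --validate=false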

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753440 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753440 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753440 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-753440 --alsologtostderr -v=1] stderr:
I1009 18:40:48.536559   64126 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:48.536897   64126 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:48.536908   64126 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:48.536912   64126 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:48.537110   64126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:48.537415   64126 mustload.go:65] Loading cluster: functional-753440
I1009 18:40:48.537739   64126 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:48.538098   64126 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:48.558932   64126 host.go:66] Checking if "functional-753440" exists ...
I1009 18:40:48.559266   64126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 18:40:48.621048   64126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:48.609922966 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 18:40:48.621223   64126 api_server.go:166] Checking apiserver status ...
I1009 18:40:48.621285   64126 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1009 18:40:48.621332   64126 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:48.643131   64126 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
W1009 18:40:48.753542   64126 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1009 18:40:48.755735   64126 out.go:179] * The control-plane node functional-753440 apiserver is not running: (state=Stopped)
I1009 18:40:48.757403   64126 out.go:179]   To start a cluster, run: "minikube start -p functional-753440"
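The dashboard command bails out at the api_server check logged above: minikube probes over SSH for a kube-apiserver process and treats a failed match as state=Stopped. The same probe can be reproduced by hand; a sketch using the exact pattern from the log:

	# Exit status 1 (no matching process) is what produced "state=Stopped" above:
	minikube ssh -p functional-753440 -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'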
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
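Individual fields from an inspect payload like the one above can be read with docker's Go-template support instead of scanning the full JSON; a minimal sketch against this run's container name:

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' functional-753440
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-753440

The second command resolves the host port published for the apiserver port 8441 (32781 in the snapshot above).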
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (308.587581ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount     │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount1 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh       │ functional-753440 ssh findmnt -T /mount1                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount     │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount3 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount     │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount2 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image     │ functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image     │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh findmnt -T /mount1                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image     │ functional-753440 image save kicbase/echo-server:functional-753440 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh findmnt -T /mount2                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image     │ functional-753440 image rm kicbase/echo-server:functional-753440 --alsologtostderr                                                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh findmnt -T /mount3                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image     │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount     │ -p functional-753440 --kill=true                                                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image     │ functional-753440 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image     │ functional-753440 image save --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /etc/ssl/certs/14880.pem                                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /usr/share/ca-certificates/14880.pem                                                                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /etc/ssl/certs/148802.pem                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /usr/share/ca-certificates/148802.pem                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh       │ functional-753440 ssh sudo cat /etc/test/nested/copy/14880/hosts                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ dashboard │ --url --port 36195 -p functional-753440 --alsologtostderr -v=1                                                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image     │ functional-753440 image ls --format short --alsologtostderr                                                                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:40:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:40:41.059621   59814 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:41.059885   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.059896   59814 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:41.059899   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.060215   59814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:41.060650   59814 out.go:368] Setting JSON to false
	I1009 18:40:41.061515   59814 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4989,"bootTime":1760030252,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:40:41.061609   59814 start.go:141] virtualization: kvm guest
	I1009 18:40:41.063781   59814 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:40:41.065771   59814 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:40:41.065764   59814 notify.go:220] Checking for updates...
	I1009 18:40:41.068913   59814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:40:41.070481   59814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:40:41.071797   59814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:40:41.073119   59814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:40:41.074623   59814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:40:41.076619   59814 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:41.077037   59814 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:40:41.102735   59814 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:40:41.102838   59814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:40:41.165489   59814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:41.154761452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:40:41.165636   59814 docker.go:318] overlay module found
	I1009 18:40:41.167894   59814 out.go:179] * Using the docker driver based on the existing profile
	I1009 18:40:41.169565   59814 start.go:305] selected driver: docker
	I1009 18:40:41.169585   59814 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:40:41.169700   59814 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:40:41.172117   59814 out.go:203] 
	W1009 18:40:41.173651   59814 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:40:41.175097   59814 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.82623148Z" level=info msg="Checking image status: kicbase/echo-server:functional-753440" id=976fa83d-ab23-4f19-b44b-afd04ec7a9e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850520799Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753440" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850632738Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753440 not found" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850662236Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753440 found" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.8758151Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753440" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.875947263Z" level=info msg="Image localhost/kicbase/echo-server:functional-753440 not found" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.875977055Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753440 found" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.627905355Z" level=info msg="Checking image status: kicbase/echo-server:functional-753440" id=69d18627-4136-4431-8c05-635fa6e2e52c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654094947Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753440" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654244391Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753440 not found" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654281726Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753440 found" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680627494Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753440" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680746847Z" level=info msg="Image localhost/kicbase/echo-server:functional-753440 not found" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680775592Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753440 found" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.536545286Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=9d716c6c-0b36-444d-9a43-145939f5140c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.537509174Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8bfc6c23-9671-46cd-b2f2-e852de7a72f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.538661822Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.538884513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.543014511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.543599307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.559854128Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561422863Z" level=info msg="createCtr: deleting container ID 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386 from idIndex" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561465605Z" level=info msg="createCtr: removing container 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561505744Z" level=info msg="createCtr: deleting container 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386 from storage" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.564276404Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:49.775771   18180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:49.776418   18180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:49.778333   18180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:49.778841   18180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:49.780628   18180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:49 up  1:23,  0 user,  load average: 0.57, 0.17, 0.11
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:40 functional-753440 kubelet[14909]: E1009 18:40:40.566440   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.345009   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.535593   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569692   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:41 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:41 functional-753440 kubelet[14909]:  > podSandboxID="7e16b1bb2bf2df093cc66fa197bd5344740cdfe9b099dcd26ba3fc1c3435b769"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569909   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:41 functional-753440 kubelet[14909]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:41 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569951   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:40:43 functional-753440 kubelet[14909]: E1009 18:40:43.707727   14909 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.161474   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.198207   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: I1009 18:40:45.320113   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.320518   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.535997   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564766   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:46 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:46 functional-753440 kubelet[14909]:  > podSandboxID="fb34d4f739975f6378a39e225741fb0e80fac36aeda99b2080b81999ee15d221"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564854   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:46 functional-753440 kubelet[14909]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:46 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564885   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:40:47 functional-753440 kubelet[14909]: E1009 18:40:47.551723   14909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:40:49 functional-753440 kubelet[14909]: E1009 18:40:49.733346   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	

                                                
                                                
-- /stdout --
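The empty "container status" table and the repeated "cannot open sd-bus" CreateContainerError entries above can be cross-checked from inside the node; a minimal sketch, assuming the crictl binary that ships in the minikube node image:

	out/minikube-linux-amd64 -p functional-753440 ssh -- sudo crictl pods
	out/minikube-linux-amd64 -p functional-753440 ssh -- sudo crictl ps -a

The kubelet log shows pod sandboxes being created (e.g. 7e16b1bb..., fb34d4f7...) while every container create fails and is rolled back, so crictl pods would be the first place to look, and crictl ps -a lists containers in all states rather than only Running ones.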
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (294.402988ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.66s)
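To retry just this subtest outside the full run, the failure can be selected with go test's standard -run filter; a minimal sketch from the repo root (the integration suite may additionally need the build tags and driver/runtime flags that the repo's Makefile normally supplies, e.g. docker and crio as used in this run):

	go test ./test/integration -run 'TestFunctional/parallel/DashboardCmd' -v -timeout 30m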

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (2.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 status: exit status 2 (337.762919ms)

                                                
                                                
-- stdout --
	functional-753440
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-753440 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (315.763228ms)

                                                
                                                
-- stdout --
	host:Running,kubelet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-753440 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 status -o json: exit status 2 (310.464412ms)

                                                
                                                
-- stdout --
	{"Name":"functional-753440","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-753440 status -o json" : exit status 2
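When a caller only needs one field, the JSON form above can be filtered instead of being matched as text; a minimal sketch, assuming jq is available on the host:

	out/minikube-linux-amd64 -p functional-753440 status -o json | jq -r .APIServer
	out/minikube-linux-amd64 -p functional-753440 status --format '{{.APIServer}}'

Both would print Stopped for this run; note that status exits 2 whenever a component is down (as in every invocation above), so scripts have to tolerate the non-zero exit code.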
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
I1009 18:40:38.721983   14880 retry.go:31] will retry after 2.847883083s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (307.490653ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ functional-753440 kubectl -- --context functional-753440 get pods                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ start   │ -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ config  │ functional-753440 config unset cpus                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config get cpus                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ service │ functional-753440 service list                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ config  │ functional-753440 config set cpus 2                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config get cpus                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config unset cpus                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config get cpus                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh -n functional-753440 sudo cat /home/docker/cp-test.txt                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh echo hello                                                                                          │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ service │ functional-753440 service list -o json                                                                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ tunnel  │ functional-753440 tunnel --alsologtostderr                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ tunnel  │ functional-753440 tunnel --alsologtostderr                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ cp      │ functional-753440 cp functional-753440:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd806855305/001/cp-test.txt │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh cat /etc/hostname                                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ service │ functional-753440 service --namespace=default --https --url hello-node                                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ tunnel  │ functional-753440 tunnel --alsologtostderr                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh -n functional-753440 sudo cat /home/docker/cp-test.txt                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ service │ functional-753440 service hello-node --url --format={{.IP}}                                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ service │ functional-753440 service hello-node --url                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ cp      │ functional-753440 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh -n functional-753440 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ addons  │ functional-753440 addons list                                                                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ addons  │ functional-753440 addons list -o json                                                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:14.121358   41166 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:14.121581   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121584   41166 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:14.121587   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121762   41166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:28:14.122238   41166 out.go:368] Setting JSON to false
	I1009 18:28:14.123079   41166 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4242,"bootTime":1760030252,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:28:14.123169   41166 start.go:141] virtualization: kvm guest
	I1009 18:28:14.126034   41166 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:28:14.127592   41166 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:14.127614   41166 notify.go:220] Checking for updates...
	I1009 18:28:14.130226   41166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:14.131542   41166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:28:14.132869   41166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:28:14.134010   41166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:28:14.135272   41166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:14.137002   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:14.137147   41166 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:14.160624   41166 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:28:14.160747   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.216904   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.207579982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.216988   41166 docker.go:318] overlay module found
	I1009 18:28:14.218985   41166 out.go:179] * Using the docker driver based on existing profile
	I1009 18:28:14.220343   41166 start.go:305] selected driver: docker
	I1009 18:28:14.220350   41166 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.220421   41166 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:14.220493   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.276259   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.266635533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.276841   41166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:28:14.276862   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:14.276912   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:14.276975   41166 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.279613   41166 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:28:14.281054   41166 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:28:14.282608   41166 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:14.283987   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:14.284021   41166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:28:14.284028   41166 cache.go:64] Caching tarball of preloaded images
	I1009 18:28:14.284084   41166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:14.284156   41166 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:28:14.284167   41166 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:28:14.284262   41166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:28:14.304989   41166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:28:14.304998   41166 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:28:14.305012   41166 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:14.305037   41166 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:14.305103   41166 start.go:364] duration metric: took 53.763µs to acquireMachinesLock for "functional-753440"
	I1009 18:28:14.305117   41166 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:28:14.305123   41166 fix.go:54] fixHost starting: 
	I1009 18:28:14.305350   41166 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:28:14.322441   41166 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:28:14.322475   41166 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:28:14.324442   41166 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:28:14.324473   41166 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:14.324533   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.341338   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.341548   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.341554   41166 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:14.486226   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.486250   41166 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:28:14.486345   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.504505   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.504708   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.504715   41166 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:28:14.659579   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.659644   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.677783   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.677973   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.677983   41166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:14.823918   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:14.823946   41166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:28:14.823965   41166 ubuntu.go:190] setting up certificates
	I1009 18:28:14.823972   41166 provision.go:84] configureAuth start
	I1009 18:28:14.824015   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:14.841567   41166 provision.go:143] copyHostCerts
	I1009 18:28:14.841617   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:28:14.841630   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:28:14.841693   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:28:14.841773   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:28:14.841776   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:28:14.841800   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:28:14.841852   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:28:14.841854   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:28:14.841874   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:28:14.841914   41166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
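The server certificate generated here carries the SANs listed in the san=[...] field above. A quick way to confirm them on the generated file (a sketch, assuming a stock openssl; the path is the ServerCertPath from the auth options logged earlier):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'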
	I1009 18:28:14.981751   41166 provision.go:177] copyRemoteCerts
	I1009 18:28:14.981793   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:14.981823   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.999896   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.102707   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:28:15.120896   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:28:15.138889   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:28:15.156869   41166 provision.go:87] duration metric: took 332.885748ms to configureAuth
	I1009 18:28:15.156885   41166 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:15.157034   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:15.157151   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.175195   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:15.175399   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:15.175409   41166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:28:15.452446   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:28:15.452465   41166 machine.go:96] duration metric: took 1.127985417s to provisionDockerMachine
	I1009 18:28:15.452477   41166 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:28:15.452491   41166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:15.452568   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:15.452629   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.470937   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.575864   41166 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:15.579955   41166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:15.579971   41166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:15.579990   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:28:15.580053   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:28:15.580152   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:28:15.580226   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:28:15.580265   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:28:15.588947   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:15.607328   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:28:15.625331   41166 start.go:296] duration metric: took 172.840814ms for postStartSetup
	I1009 18:28:15.625414   41166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:15.625450   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.644868   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.745460   41166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:15.750036   41166 fix.go:56] duration metric: took 1.444904813s for fixHost
	I1009 18:28:15.750054   41166 start.go:83] releasing machines lock for "functional-753440", held for 1.444944565s
	I1009 18:28:15.750113   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:15.768383   41166 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:15.768426   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.768462   41166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:15.768509   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.787244   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.788794   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.887419   41166 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:15.939267   41166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:28:15.975115   41166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:15.980039   41166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:15.980121   41166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:15.988843   41166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
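The log line above drops the shell escaping from the disable-bridge-CNI command; a readable reconstruction of the same command, with the escapes restored as standard find syntax:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;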
	I1009 18:28:15.988855   41166 start.go:495] detecting cgroup driver to use...
	I1009 18:28:15.988896   41166 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:15.988937   41166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:28:16.003980   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:28:16.017315   41166 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:16.017382   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:16.032779   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:16.045881   41166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:16.126678   41166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:16.213883   41166 docker.go:234] disabling docker service ...
	I1009 18:28:16.213927   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:16.229180   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:16.242501   41166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:16.328471   41166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:16.418726   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:16.432452   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:16.447044   41166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:28:16.447090   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.456711   41166 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:28:16.456763   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.466740   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.476505   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.485804   41166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:16.494457   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.504131   41166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.513460   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.522986   41166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:16.531036   41166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:16.539288   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:16.625799   41166 ssh_runner.go:195] Run: sudo systemctl restart crio
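Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment before the restart (reconstructed from the commands, not captured from the node; key order may differ):

	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	pause_image = "registry.k8s.io/pause:3.10.1"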
	I1009 18:28:16.734227   41166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:28:16.734392   41166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:28:16.738753   41166 start.go:563] Will wait 60s for crictl version
	I1009 18:28:16.738810   41166 ssh_runner.go:195] Run: which crictl
	I1009 18:28:16.742485   41166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:16.767659   41166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
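With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines up), crictl can query the runtime directly; a sketch equivalent to the 'crictl ps -a' call near the end of this log:

	# --runtime-endpoint overrides /etc/crictl.yaml; spelled out here for clarity
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a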
	I1009 18:28:16.767722   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.796602   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.826463   41166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:28:16.827844   41166 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:16.845122   41166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:16.851283   41166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 18:28:16.852593   41166 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:16.852703   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:16.852758   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.885854   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.885865   41166 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:28:16.885914   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.911537   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.911549   41166 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:16.911554   41166 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:28:16.911659   41166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:28:16.911716   41166 ssh_runner.go:195] Run: crio config
	I1009 18:28:16.959392   41166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 18:28:16.959415   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:16.959431   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:16.959447   41166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:16.959474   41166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:16.959581   41166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:16.959637   41166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:28:16.967720   41166 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:16.967786   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:16.975557   41166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:28:16.988463   41166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:17.001726   41166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
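Once the rendered kubeadm config lands in /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline; a sketch, assuming this kubeadm build ships the 'config validate' subcommand (present in recent releases; not something this run itself invokes):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new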
	I1009 18:28:17.014711   41166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:17.018916   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:17.102967   41166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:17.116133   41166 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:28:17.116168   41166 certs.go:195] generating shared ca certs ...
	I1009 18:28:17.116186   41166 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:17.116310   41166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:28:17.116344   41166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:28:17.116350   41166 certs.go:257] generating profile certs ...
	I1009 18:28:17.116439   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:28:17.116473   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:28:17.116504   41166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:28:17.116599   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:28:17.116623   41166 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:17.116628   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:17.116647   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:28:17.116699   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:17.116718   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:28:17.116754   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:17.117319   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:17.135881   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:28:17.153983   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:17.171867   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:17.189721   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:28:17.208056   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:28:17.226995   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:17.245251   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:28:17.263239   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:28:17.281041   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:28:17.298701   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:17.316541   41166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:17.329669   41166 ssh_runner.go:195] Run: openssl version
	I1009 18:28:17.335820   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:17.344631   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348564   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348610   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.382973   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:17.391446   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:28:17.399936   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403644   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403697   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.438115   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:17.446527   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:28:17.455201   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459043   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459093   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.494448   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
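The b5213941.0, 51391683.0, and 3ec20f2e.0 link names above are OpenSSL subject hashes, the naming scheme by which /etc/ssl/certs is indexed. The pattern behind each hash/symlink pair:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash, here b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # <hash>.0 makes the cert discoverable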
	I1009 18:28:17.503208   41166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:17.507381   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:28:17.542560   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:28:17.577279   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:28:17.612414   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:28:17.648669   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:28:17.684353   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
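Each of the six openssl x509 ... -checkend 86400 probes asks whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, so the cert can be reused; a non-zero exit would force regeneration. By hand:

    # -checkend N: exit 0 if the cert is valid for at least N more seconds.
    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "still valid for 24h, reuse"
    else
        echo "expires within 24h, regenerate"
    fi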
	I1009 18:28:17.718697   41166 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:17.718762   41166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:17.718816   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.747722   41166 cri.go:89] found id: ""
	I1009 18:28:17.747771   41166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:17.755951   41166 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:28:17.755970   41166 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:28:17.756013   41166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:28:17.763739   41166 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.764201   41166 kubeconfig.go:125] found "functional-753440" server: "https://192.168.49.2:8441"
	I1009 18:28:17.765394   41166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:28:17.773512   41166 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 18:13:46.132659514 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 18:28:17.012910366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
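This drift is expected for the run: the cluster config carries ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] (see the StartCluster dump above), so the freshly rendered kubeadm.yaml.new replaces the default admission-plugin list and the copy deployed at 18:13 no longer matches. Detection is just diff's exit status:

    # diff exits 0 when files match, 1 when they differ, >1 on error;
    # any difference triggers the cluster reconfiguration path.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected, will reconfigure"
    fi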
	I1009 18:28:17.773526   41166 kubeadm.go:1160] stopping kube-system containers ...
	I1009 18:28:17.773536   41166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 18:28:17.773573   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.801424   41166 cri.go:89] found id: ""
	I1009 18:28:17.801491   41166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:28:17.844900   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:17.853365   41166 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  9 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  9 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  9 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  9 18:17 /etc/kubernetes/scheduler.conf
	
	I1009 18:28:17.853413   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:28:17.861284   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:28:17.869531   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.869582   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:17.877552   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.885384   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.885430   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.893514   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:28:17.901554   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.901605   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
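The pattern in this block: grep each static kubeconfig for the expected control-plane endpoint. admin.conf matches and is kept, while kubelet.conf, controller-manager.conf, and scheduler.conf each return grep's no-match exit status 1, so minikube deletes them and lets the upcoming kubeadm init phase kubeconfig rewrite them. Condensed into a sketch:

    endpoint="https://control-plane.minikube.internal:8441"
    for f in kubelet.conf controller-manager.conf scheduler.conf; do
        # grep -q exits 1 on no match; drop the stale file so kubeadm
        # regenerates it against the new endpoint.
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done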
	I1009 18:28:17.910046   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:17.918503   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:17.960612   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.029109   41166 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068473628s)
	I1009 18:28:19.029180   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.195034   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.243702   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
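Rather than a full kubeadm init, the restart path replays the individual init phases against the updated config: certs, kubeconfig files, kubelet bootstrap, static control-plane manifests, and local etcd. Run by hand, with the same versioned binary directory the log uses:

    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.34.1
    # Phases must run in this order; word-splitting on $phase is intentional
    # so multi-word phase names ("certs all") expand into separate arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
    done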
	I1009 18:28:19.294305   41166 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:28:19.294364   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:19.794527   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:20.295201   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:20.794575   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:21.295315   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:21.795156   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:22.294825   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:22.794676   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:23.295341   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:23.795290   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:24.295084   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:24.794558   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:25.295301   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:25.794886   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:26.295362   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:26.795204   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:27.295068   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:27.794501   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:28.295278   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:28.795020   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:29.294945   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:29.795382   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:30.294824   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:30.794608   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:31.295203   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:31.795244   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:32.294545   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:32.794712   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:33.294432   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:33.795152   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:34.294924   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:34.794572   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:35.295260   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:35.794912   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:36.294546   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:36.795240   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:37.294721   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:37.794468   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:38.295324   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:38.795118   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:39.295123   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:39.795377   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:40.294883   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:40.795163   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:41.294810   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:41.794568   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:42.295334   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:42.795216   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:43.294867   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:43.794631   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:44.294584   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:44.795416   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:45.294988   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:45.795459   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:46.295344   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:46.794912   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:47.294535   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:47.795297   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:48.294813   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:48.794435   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:49.295044   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:49.794820   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:50.294561   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:50.795171   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:51.295301   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:51.794820   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:52.295356   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:52.795166   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:53.294824   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:53.795465   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:54.295177   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:54.794443   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:55.294528   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:55.794977   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:56.294481   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:56.795276   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:57.295436   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:57.795235   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:58.294498   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:58.794950   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:59.294720   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:59.794600   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:00.295262   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:00.794624   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:01.294757   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:01.794835   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:02.294745   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:02.795101   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:03.295356   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:03.794515   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:04.294776   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:04.794940   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:05.295069   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:05.794648   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:06.294527   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:06.794749   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:07.294659   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:07.795339   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:08.295340   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:08.795175   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:09.294617   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:09.795133   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:10.295346   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:10.795313   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:11.295322   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:11.794750   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:12.294795   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:12.794516   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:13.295074   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:13.794456   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:14.294872   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:14.794437   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:15.294584   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:15.794709   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:16.295308   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:16.795334   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:17.294662   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:17.795191   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:18.294594   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:18.794871   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
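Everything from 18:28:19 onward is minikube's apiserver wait loop: twice a second it runs pgrep -xnf kube-apiserver.*minikube.* (-f matches against the full command line, -x requires the whole line to match the pattern, -n keeps only the newest hit), and it never gets a match in the full minute before the diagnostic sweeps below begin. A minimal equivalent of the loop:

    # Poll for a kube-apiserver process for up to 60s at 0.5s intervals.
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "apiserver process never appeared" >&2
            break
        fi
        sleep 0.5
    done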
	I1009 18:29:19.295378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:19.295433   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:19.321387   41166 cri.go:89] found id: ""
	I1009 18:29:19.321402   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.321411   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:19.321418   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:19.321468   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:19.348366   41166 cri.go:89] found id: ""
	I1009 18:29:19.348380   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.348387   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:19.348391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:19.348435   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:19.374894   41166 cri.go:89] found id: ""
	I1009 18:29:19.374906   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.374912   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:19.374916   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:19.374955   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:19.401088   41166 cri.go:89] found id: ""
	I1009 18:29:19.401106   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.401114   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:19.401121   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:19.401191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:19.428021   41166 cri.go:89] found id: ""
	I1009 18:29:19.428033   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.428043   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:19.428047   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:19.428105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:19.454576   41166 cri.go:89] found id: ""
	I1009 18:29:19.454590   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.454595   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:19.454599   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:19.454639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:19.480743   41166 cri.go:89] found id: ""
	I1009 18:29:19.480760   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.480767   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:19.480774   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:19.480783   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:19.509728   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:19.509743   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:19.578764   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:19.578781   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:19.590528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:19.590544   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:19.646752   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:19.639577    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.640309    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.641990    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.642451    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.643983    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:19.639577    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.640309    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.641990    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.642451    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.643983    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:19.646773   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:19.646784   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
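Each diagnostic round is the same sweep (the ordering shuffles between rounds): crictl is asked for every expected control-plane container and finds none, then the kubelet and CRI-O journals, dmesg, a container-status listing, and kubectl describe nodes are collected. The describe-nodes failure is the real signal here: localhost:8441 refuses connections because no apiserver container ever started. Note the container-status fallback, which tolerates a missing crictl and falls through to docker; the log's backquoted form, in modern substitution syntax:

    # Prefer crictl if it resolves on PATH; if that command fails, try docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

From here the report repeats the probe-plus-sweep cycle roughly every three seconds (18:29:22 through 18:29:33) with identical empty results.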
	I1009 18:29:22.208868   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:22.219498   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:22.219549   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:22.245808   41166 cri.go:89] found id: ""
	I1009 18:29:22.245825   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.245833   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:22.245839   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:22.245884   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:22.271240   41166 cri.go:89] found id: ""
	I1009 18:29:22.271253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.271259   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:22.271263   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:22.271301   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:22.299626   41166 cri.go:89] found id: ""
	I1009 18:29:22.299641   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.299650   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:22.299656   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:22.299699   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:22.326461   41166 cri.go:89] found id: ""
	I1009 18:29:22.326473   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.326479   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:22.326484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:22.326526   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:22.352237   41166 cri.go:89] found id: ""
	I1009 18:29:22.352253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.352264   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:22.352268   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:22.352316   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:22.378255   41166 cri.go:89] found id: ""
	I1009 18:29:22.378268   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.378276   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:22.378297   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:22.378351   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:22.403983   41166 cri.go:89] found id: ""
	I1009 18:29:22.403999   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.404006   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:22.404013   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:22.404024   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:22.470710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:22.470727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:22.482584   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:22.482599   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:22.536359   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:22.529981    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.530412    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.531972    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.532353    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.533814    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:22.529981    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.530412    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.531972    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.532353    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.533814    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:22.536380   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:22.536394   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:22.601517   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:22.601533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:25.128918   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:25.139722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:25.139766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:25.165463   41166 cri.go:89] found id: ""
	I1009 18:29:25.165478   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.165486   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:25.165490   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:25.165537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:25.190387   41166 cri.go:89] found id: ""
	I1009 18:29:25.190400   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.190407   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:25.190411   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:25.190460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:25.216675   41166 cri.go:89] found id: ""
	I1009 18:29:25.216690   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.216698   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:25.216703   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:25.216747   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:25.242179   41166 cri.go:89] found id: ""
	I1009 18:29:25.242191   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.242197   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:25.242202   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:25.242248   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:25.267486   41166 cri.go:89] found id: ""
	I1009 18:29:25.267502   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.267511   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:25.267517   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:25.267568   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:25.297914   41166 cri.go:89] found id: ""
	I1009 18:29:25.297930   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.297939   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:25.297945   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:25.298000   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:25.328702   41166 cri.go:89] found id: ""
	I1009 18:29:25.328718   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.328727   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:25.328736   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:25.328747   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:25.395115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:25.395130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:25.407227   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:25.407245   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:25.462374   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:25.462400   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:25.462410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:25.525388   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:25.525409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:28.053225   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:28.063873   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:28.063918   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:28.088014   41166 cri.go:89] found id: ""
	I1009 18:29:28.088030   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.088038   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:28.088045   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:28.088091   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:28.114133   41166 cri.go:89] found id: ""
	I1009 18:29:28.114163   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.114172   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:28.114177   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:28.114221   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:28.138995   41166 cri.go:89] found id: ""
	I1009 18:29:28.139007   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.139017   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:28.139022   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:28.139072   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:28.163909   41166 cri.go:89] found id: ""
	I1009 18:29:28.163925   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.163984   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:28.163991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:28.164032   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:28.190078   41166 cri.go:89] found id: ""
	I1009 18:29:28.190091   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.190096   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:28.190101   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:28.190171   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:28.215236   41166 cri.go:89] found id: ""
	I1009 18:29:28.215251   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.215260   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:28.215265   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:28.215315   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:28.241659   41166 cri.go:89] found id: ""
	I1009 18:29:28.241675   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.241684   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:28.241692   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:28.241701   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:28.312258   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:28.312275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:28.323979   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:28.323994   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:28.380524   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:28.380538   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:28.380547   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:28.442571   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:28.442588   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:30.972438   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:30.983019   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:30.983078   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:31.007563   41166 cri.go:89] found id: ""
	I1009 18:29:31.007577   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.007585   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:31.007591   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:31.007665   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:31.033297   41166 cri.go:89] found id: ""
	I1009 18:29:31.033312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.033320   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:31.033326   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:31.033381   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:31.058733   41166 cri.go:89] found id: ""
	I1009 18:29:31.058748   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.058756   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:31.058761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:31.058815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:31.084119   41166 cri.go:89] found id: ""
	I1009 18:29:31.084133   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.084156   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:31.084162   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:31.084206   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:31.109429   41166 cri.go:89] found id: ""
	I1009 18:29:31.109442   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.109448   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:31.109452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:31.109510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:31.135299   41166 cri.go:89] found id: ""
	I1009 18:29:31.135312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.135322   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:31.135328   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:31.135413   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:31.162606   41166 cri.go:89] found id: ""
	I1009 18:29:31.162621   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.162636   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:31.162643   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:31.162652   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:31.230506   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:31.230556   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:31.241809   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:31.241825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:31.297388   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:31.297398   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:31.297413   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:31.361486   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:31.361502   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:33.891238   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:33.902005   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:33.902060   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:33.927598   41166 cri.go:89] found id: ""
	I1009 18:29:33.927612   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.927618   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:33.927622   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:33.927673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:33.952038   41166 cri.go:89] found id: ""
	I1009 18:29:33.952053   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.952061   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:33.952066   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:33.952145   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:33.976526   41166 cri.go:89] found id: ""
	I1009 18:29:33.976541   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.976549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:33.976556   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:33.976610   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:34.003219   41166 cri.go:89] found id: ""
	I1009 18:29:34.003234   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.003242   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:34.003247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:34.003330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:34.029762   41166 cri.go:89] found id: ""
	I1009 18:29:34.029775   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.029781   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:34.029785   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:34.029840   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:34.054085   41166 cri.go:89] found id: ""
	I1009 18:29:34.054097   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.054107   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:34.054112   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:34.054179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:34.080890   41166 cri.go:89] found id: ""
	I1009 18:29:34.080903   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.080909   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:34.080915   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:34.080926   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:34.110411   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:34.110426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:34.181234   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:34.181254   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:34.192758   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:34.192772   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:34.248477   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:34.248486   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:34.248496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:36.816158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:36.827291   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:36.827356   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:36.851760   41166 cri.go:89] found id: ""
	I1009 18:29:36.851775   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.851783   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:36.851789   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:36.851843   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:36.877217   41166 cri.go:89] found id: ""
	I1009 18:29:36.877231   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.877238   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:36.877243   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:36.877284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:36.902388   41166 cri.go:89] found id: ""
	I1009 18:29:36.902401   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.902407   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:36.902411   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:36.902450   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:36.927658   41166 cri.go:89] found id: ""
	I1009 18:29:36.927673   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.927679   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:36.927683   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:36.927735   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:36.952663   41166 cri.go:89] found id: ""
	I1009 18:29:36.952681   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.952688   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:36.952692   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:36.952731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:36.977753   41166 cri.go:89] found id: ""
	I1009 18:29:36.977768   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.977774   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:36.977779   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:36.977819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:37.002782   41166 cri.go:89] found id: ""
	I1009 18:29:37.002796   41166 logs.go:282] 0 containers: []
	W1009 18:29:37.002801   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:37.002807   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:37.002816   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:37.069710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:37.069726   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:37.081854   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:37.081876   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:37.136826   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:37.136835   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:37.136844   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:37.201251   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:37.201270   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
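	# --- illustrative sketch, not part of the captured log ---
	# The probe cycle repeating above boils down to the commands minikube's
	# ssh_runner keeps issuing inside the node; a minimal manual reproduction,
	# assuming only that crictl and journalctl are present (both are invoked
	# the same way in the log lines above):
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # is an apiserver process running at all?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  sudo crictl ps -a --quiet --name="$c"         # any CRI container (running or exited) for this component?
	done
	sudo journalctl -u kubelet -n 400               # why the kubelet never started them
	sudo journalctl -u crio -n 400                  # and whether CRI-O itself is healthy
	# --- end sketch ---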
	I1009 18:29:57.253959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:57.264749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:57.264793   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:57.292216   41166 cri.go:89] found id: ""
	I1009 18:29:57.292234   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.292244   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:57.292252   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:57.292322   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:57.320628   41166 cri.go:89] found id: ""
	I1009 18:29:57.320644   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.320657   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:57.320663   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:57.320711   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:57.347524   41166 cri.go:89] found id: ""
	I1009 18:29:57.347541   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.347549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:57.347555   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:57.347599   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:57.374005   41166 cri.go:89] found id: ""
	I1009 18:29:57.374021   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.374029   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:57.374034   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:57.374080   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:57.398685   41166 cri.go:89] found id: ""
	I1009 18:29:57.398700   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.398706   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:57.398710   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:57.398758   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:57.424224   41166 cri.go:89] found id: ""
	I1009 18:29:57.424237   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.424243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:57.424247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:57.424298   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:57.449118   41166 cri.go:89] found id: ""
	I1009 18:29:57.449144   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.449153   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:57.449161   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:57.449170   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:57.477726   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:57.477741   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:57.549189   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:57.549206   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:57.560914   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:57.560933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:57.615954   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:57.615970   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:57.615980   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
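The describe nodes failure above is the key symptom: kubectl cannot reach https://localhost:8441 and gets connection refused, meaning the TCP handshake is actively rejected because nothing is listening on the apiserver port, which is consistent with crictl finding no kube-apiserver container. The same check can be made directly; a minimal stdlib sketch (localhost:8441 taken from the log, everything else illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Connection refused = host reachable, nothing listening on the port;
        // a timeout would instead point at a network or firewall problem.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8441")
    }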
	I1009 18:30:00.177763   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:00.188584   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:00.188628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:00.214820   41166 cri.go:89] found id: ""
	I1009 18:30:00.214835   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.214844   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:00.214851   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:00.214895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:00.239376   41166 cri.go:89] found id: ""
	I1009 18:30:00.239393   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.239401   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:00.239407   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:00.239447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:00.265476   41166 cri.go:89] found id: ""
	I1009 18:30:00.265492   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.265500   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:00.265506   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:00.265556   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:00.291131   41166 cri.go:89] found id: ""
	I1009 18:30:00.291158   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.291167   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:00.291174   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:00.291226   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:00.316623   41166 cri.go:89] found id: ""
	I1009 18:30:00.316636   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.316642   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:00.316646   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:00.316693   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:00.341462   41166 cri.go:89] found id: ""
	I1009 18:30:00.341476   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.341485   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:00.341490   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:00.341531   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:00.366641   41166 cri.go:89] found id: ""
	I1009 18:30:00.366657   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.366663   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:00.366670   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:00.366679   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:00.397505   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:00.397539   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:00.469540   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:00.469557   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:00.481466   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:00.481480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:00.537449   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:00.537457   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:00.537466   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
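Note the cadence: the sudo pgrep -xnf kube-apiserver.*minikube.* probe fires at 18:29:57, 18:30:00, 18:30:03, and so on, i.e. minikube is polling roughly every three seconds for the apiserver process to appear. A rough stdlib sketch of such a wait loop (the three-second interval matches the log; waitForProcess and the one-minute timeout are illustrative assumptions, not minikube's real loop):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep on an interval until the pattern matches or
    // the deadline passes. Sketch only.
    func waitForProcess(pattern string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", 3*time.Second, time.Minute); err != nil {
            fmt.Println(err)
        }
    }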
	I1009 18:30:03.107457   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:03.117969   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:03.118030   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:03.144661   41166 cri.go:89] found id: ""
	I1009 18:30:03.144676   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.144684   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:03.144689   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:03.144731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:03.169819   41166 cri.go:89] found id: ""
	I1009 18:30:03.169832   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.169838   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:03.169842   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:03.169880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:03.195252   41166 cri.go:89] found id: ""
	I1009 18:30:03.195264   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.195271   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:03.195276   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:03.195319   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:03.221154   41166 cri.go:89] found id: ""
	I1009 18:30:03.221169   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.221176   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:03.221181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:03.221222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:03.247656   41166 cri.go:89] found id: ""
	I1009 18:30:03.247670   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.247676   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:03.247680   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:03.247736   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:03.273363   41166 cri.go:89] found id: ""
	I1009 18:30:03.273378   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.273386   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:03.273391   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:03.273439   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:03.297383   41166 cri.go:89] found id: ""
	I1009 18:30:03.297399   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.297407   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:03.297415   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:03.297426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:03.327096   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:03.327110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:03.396551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:03.396569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:03.408005   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:03.408020   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:03.462643   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:03.462656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:03.462667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
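Each failed iteration ends with the same collection steps: kubelet and CRI-O journals, recent dmesg warnings, container status via crictl (falling back to docker ps), and the failing describe nodes call. To reproduce the collection on the node, a hedged sketch running the same shell commands the log shows (the ordering and output formatting here are ours):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The same commands minikube runs over SSH in the log, executed locally.
        cmds := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            fmt.Printf("== %s (err=%v) ==\n%s\n", c.name, err, out)
        }
    }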
	I1009 18:30:06.023381   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:06.034110   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:06.034175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:06.059176   41166 cri.go:89] found id: ""
	I1009 18:30:06.059191   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.059197   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:06.059201   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:06.059261   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:06.085110   41166 cri.go:89] found id: ""
	I1009 18:30:06.085126   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.085146   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:06.085154   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:06.085211   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:06.110722   41166 cri.go:89] found id: ""
	I1009 18:30:06.110738   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.110747   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:06.110753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:06.110806   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:06.136728   41166 cri.go:89] found id: ""
	I1009 18:30:06.136744   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.136752   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:06.136758   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:06.136815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:06.162322   41166 cri.go:89] found id: ""
	I1009 18:30:06.162337   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.162345   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:06.162351   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:06.162409   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:06.189203   41166 cri.go:89] found id: ""
	I1009 18:30:06.189217   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.189225   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:06.189230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:06.189374   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:06.215767   41166 cri.go:89] found id: ""
	I1009 18:30:06.215781   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.215790   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:06.215798   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:06.215811   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:06.286131   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:06.286154   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:06.297884   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:06.297899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:06.354614   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:06.354625   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:06.354634   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.421245   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:06.421263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:08.950561   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:08.961412   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:08.961461   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:08.985056   41166 cri.go:89] found id: ""
	I1009 18:30:08.985073   41166 logs.go:282] 0 containers: []
	W1009 18:30:08.985081   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:08.985086   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:08.985155   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:09.010161   41166 cri.go:89] found id: ""
	I1009 18:30:09.010177   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.010185   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:09.010190   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:09.010240   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:09.035006   41166 cri.go:89] found id: ""
	I1009 18:30:09.035021   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.035030   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:09.035035   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:09.035079   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:09.059807   41166 cri.go:89] found id: ""
	I1009 18:30:09.059822   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.059831   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:09.059836   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:09.059877   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:09.085467   41166 cri.go:89] found id: ""
	I1009 18:30:09.085482   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.085490   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:09.085495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:09.085536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:09.110808   41166 cri.go:89] found id: ""
	I1009 18:30:09.110821   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.110826   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:09.110831   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:09.110869   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:09.135842   41166 cri.go:89] found id: ""
	I1009 18:30:09.135854   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.135860   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:09.135867   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:09.135875   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:09.195931   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:09.195948   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:09.225362   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:09.225375   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:09.296888   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:09.296905   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:09.309206   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:09.309223   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:09.365940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:11.867608   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:11.878320   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:11.878362   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:11.904080   41166 cri.go:89] found id: ""
	I1009 18:30:11.904094   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.904103   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:11.904109   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:11.904175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:11.930291   41166 cri.go:89] found id: ""
	I1009 18:30:11.930308   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.930327   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:11.930332   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:11.930372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:11.955946   41166 cri.go:89] found id: ""
	I1009 18:30:11.955959   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.955965   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:11.955970   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:11.956022   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:11.981169   41166 cri.go:89] found id: ""
	I1009 18:30:11.981184   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.981190   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:11.981197   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:11.981254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:12.006868   41166 cri.go:89] found id: ""
	I1009 18:30:12.006882   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.006890   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:12.006896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:12.006950   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:12.033045   41166 cri.go:89] found id: ""
	I1009 18:30:12.033062   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.033070   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:12.033076   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:12.033123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:12.059215   41166 cri.go:89] found id: ""
	I1009 18:30:12.059228   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.059233   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:12.059240   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:12.059249   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:12.088610   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:12.088630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:12.156730   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:12.156750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:12.168340   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:12.168354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:12.224955   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:12.217733    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.218350    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220045    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220517    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.222048    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:12.217733    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.218350    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220045    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220517    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.222048    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:12.224965   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:12.224974   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:14.790502   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:14.801228   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:14.801285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:14.828449   41166 cri.go:89] found id: ""
	I1009 18:30:14.828469   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.828478   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:14.828486   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:14.828539   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:14.854655   41166 cri.go:89] found id: ""
	I1009 18:30:14.854672   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.854681   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:14.854687   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:14.854730   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:14.880081   41166 cri.go:89] found id: ""
	I1009 18:30:14.880103   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.880110   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:14.880119   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:14.880182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:14.906543   41166 cri.go:89] found id: ""
	I1009 18:30:14.906556   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.906562   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:14.906567   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:14.906607   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:14.932338   41166 cri.go:89] found id: ""
	I1009 18:30:14.932354   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.932360   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:14.932365   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:14.932417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:14.959648   41166 cri.go:89] found id: ""
	I1009 18:30:14.959661   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.959666   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:14.959670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:14.959722   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:14.985626   41166 cri.go:89] found id: ""
	I1009 18:30:14.985642   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.985651   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:14.985657   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:14.985667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:15.059129   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:15.059150   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:15.070684   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:15.070698   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:15.127441   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:15.120544    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.121101    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.122649    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.123113    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.124615    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:15.120544    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.121101    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.122649    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.123113    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.124615    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:15.127451   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:15.127462   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:15.188736   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:15.188755   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:17.720548   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:17.731158   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:17.731199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:17.756463   41166 cri.go:89] found id: ""
	I1009 18:30:17.756478   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.756485   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:17.756489   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:17.756532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:17.780776   41166 cri.go:89] found id: ""
	I1009 18:30:17.780792   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.780799   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:17.780804   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:17.780845   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:17.805635   41166 cri.go:89] found id: ""
	I1009 18:30:17.805648   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.805654   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:17.805658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:17.805700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:17.832060   41166 cri.go:89] found id: ""
	I1009 18:30:17.832074   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.832079   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:17.832084   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:17.832125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:17.859215   41166 cri.go:89] found id: ""
	I1009 18:30:17.859231   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.859240   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:17.859248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:17.859299   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:17.884007   41166 cri.go:89] found id: ""
	I1009 18:30:17.884021   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.884027   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:17.884031   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:17.884073   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:17.908524   41166 cri.go:89] found id: ""
	I1009 18:30:17.908537   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.908543   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:17.908550   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:17.908559   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:17.974071   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:17.974088   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:17.985794   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:17.985809   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:18.042658   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:18.035698    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.036247    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.037804    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.038378    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.039940    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:18.035698    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.036247    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.037804    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.038378    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.039940    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:18.042678   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:18.042688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:18.104183   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:18.104201   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.634002   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:20.645000   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:20.645074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:20.671295   41166 cri.go:89] found id: ""
	I1009 18:30:20.671309   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.671320   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:20.671325   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:20.671370   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:20.699380   41166 cri.go:89] found id: ""
	I1009 18:30:20.699393   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.699399   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:20.699404   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:20.699508   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:20.728459   41166 cri.go:89] found id: ""
	I1009 18:30:20.728483   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.728490   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:20.728502   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:20.728571   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:20.755606   41166 cri.go:89] found id: ""
	I1009 18:30:20.755626   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.755637   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:20.755643   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:20.755704   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:20.783272   41166 cri.go:89] found id: ""
	I1009 18:30:20.783285   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.783291   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:20.783295   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:20.783338   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:20.810985   41166 cri.go:89] found id: ""
	I1009 18:30:20.810998   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.811005   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:20.811009   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:20.811090   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:20.838557   41166 cri.go:89] found id: ""
	I1009 18:30:20.838573   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.838580   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:20.838588   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:20.838597   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.868656   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:20.868669   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:20.940019   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:20.940041   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:20.952293   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:20.952307   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:21.010202   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:21.003172    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.003783    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.005520    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.006014    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.007633    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:21.003172    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.003783    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.005520    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.006014    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.007633    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:21.010215   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:21.010228   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
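The cycle above is minikube's apiserver health-check loop: it looks for a running kube-apiserver process with pgrep, asks the CRI runtime for each expected control-plane container, and, finding none, falls back to gathering node logs. Below is a minimal Go sketch of the probe step; runSSH is an assumed stand-in for minikube's ssh_runner, and the whole sketch is illustrative rather than the actual cri.go/logs.go implementation.

    package probe

    import "strings"

    // The components the cycle above checks for, in the same order.
    var components = []string{
        "kube-apiserver", "etcd", "coredns", "kube-scheduler",
        "kube-proxy", "kube-controller-manager", "kindnet",
    }

    // listContainers runs the same crictl query seen in the log and returns
    // the matching container IDs (crictl --quiet prints one ID per line).
    // runSSH is a hypothetical helper that executes a command on the node.
    func listContainers(runSSH func(string) (string, error), name string) ([]string, error) {
        out, err := runSSH("sudo crictl ps -a --quiet --name=" + name)
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(out, "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    // probeAll counts containers per component. In the log above every count
    // is 0 (`found id: ""` / `0 containers: []`), which is why each pass
    // falls through to log gathering.
    func probeAll(runSSH func(string) (string, error)) map[string]int {
        counts := make(map[string]int)
        for _, c := range components {
            if ids, err := listContainers(runSSH, c); err == nil {
                counts[c] = len(ids)
            }
        }
        return counts
    }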
	I1009 18:30:23.575003   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:23.585670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:23.585721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:23.611187   41166 cri.go:89] found id: ""
	I1009 18:30:23.611202   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.611208   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:23.611216   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:23.611267   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:23.636952   41166 cri.go:89] found id: ""
	I1009 18:30:23.636966   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.636972   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:23.636977   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:23.637018   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:23.661266   41166 cri.go:89] found id: ""
	I1009 18:30:23.661282   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.661289   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:23.661294   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:23.661343   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:23.687560   41166 cri.go:89] found id: ""
	I1009 18:30:23.687573   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.687578   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:23.687583   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:23.687637   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:23.712015   41166 cri.go:89] found id: ""
	I1009 18:30:23.712031   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.712040   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:23.712046   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:23.712103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:23.738106   41166 cri.go:89] found id: ""
	I1009 18:30:23.738120   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.738126   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:23.738130   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:23.738191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:23.764275   41166 cri.go:89] found id: ""
	I1009 18:30:23.764288   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.764307   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:23.764314   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:23.764322   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:23.775354   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:23.775367   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:23.831862   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:23.824872    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.825499    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827105    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827605    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.829326    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:23.824872    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.825499    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827105    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827605    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.829326    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
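Note that each "failed describe nodes" block prints the kubectl stderr twice, once inside the error string and once between the ** stderr ** markers; that is how minikube formats a failed command, not two separate failures. The underlying problem is simply that nothing is listening on the apiserver port (8441 here), so each of kubectl's discovery attempts gets "connection refused". A reachability check equivalent to what keeps failing, as a sketch (the function name and address are illustrative, not minikube API):

    package check

    import (
        "net"
        "time"
    )

    // apiserverListening reports whether anything accepts TCP connections on
    // addr. While the apiserver is down it fails exactly like the kubectl
    // errors above: "connect: connection refused".
    func apiserverListening(addr string) bool {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

Throughout this window, apiserverListening("localhost:8441") would return false.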
	I1009 18:30:23.831884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:23.831893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.894598   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:23.894614   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:23.922715   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:23.922731   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
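The "Gathering logs for ..." lines fan out over a fixed set of log sources, each backed by a shell command run on the node; a failure in one source (describe nodes, while the apiserver is down) is logged as a warning and the loop continues. A hedged sketch of that fan-out, again with runSSH as an assumed helper and the command strings taken from the log above:

    package gather

    import "fmt"

    // Each log source maps to the node-side command visible in the log.
    var sources = map[string]string{
        "kubelet":          "sudo journalctl -u kubelet -n 400",
        "dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        "describe nodes":   "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
        "CRI-O":            "sudo journalctl -u crio -n 400",
        "container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    // gatherAll runs every source and keeps going on failure, mirroring the
    // "failed describe nodes" warning followed by further gathering above.
    func gatherAll(runSSH func(string) (string, error)) map[string]string {
        logs := make(map[string]string)
        for name, cmd := range sources {
            out, err := runSSH("/bin/bash -c \"" + cmd + "\"")
            if err != nil {
                out = fmt.Sprintf("failed %s: %v", name, err)
            }
            logs[name] = out
        }
        return logs
    }

The -n 400 / tail -n 400 caps bound each source to its most recent 400 lines, which keeps a report like this from growing without bound even across many retries.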
	[... 18:30:26 through 18:30:44: seven more iterations of the same cycle, identical apart from timestamps and kubectl PIDs; each pass finds no kube-apiserver process via pgrep, 0 CRI containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet, and the same "connection refused" failure from kubectl describe nodes against localhost:8441 ...]
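Zooming out, the whole section is one poll-until-deadline loop: probe the apiserver, gather diagnostics when the probe fails, sleep roughly 3 seconds (per the timestamps), and repeat until the apiserver comes up or the wait budget runs out, which is how this failure eventually times out. A minimal sketch of that retry shape, assuming a probe callback; illustrative only, not minikube's actual wait code:

    package wait

    import (
        "errors"
        "time"
    )

    // waitForAPIServer re-runs probe on a fixed interval until it succeeds
    // or the deadline passes; the log above shows roughly a 3s interval.
    func waitForAPIServer(probe func() bool, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if probe() {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New("timed out waiting for kube-apiserver")
    }

Wired to the earlier sketches, probe could be func() bool { return probeAll(runSSH)["kube-apiserver"] > 0 }.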
	I1009 18:30:46.952072   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:46.962761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:46.962852   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:46.988381   41166 cri.go:89] found id: ""
	I1009 18:30:46.988395   41166 logs.go:282] 0 containers: []
	W1009 18:30:46.988401   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:46.988406   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:46.988447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:47.014123   41166 cri.go:89] found id: ""
	I1009 18:30:47.014151   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.014161   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:47.014167   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:47.014223   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:47.040379   41166 cri.go:89] found id: ""
	I1009 18:30:47.040395   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.040403   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:47.040409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:47.040460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:47.066430   41166 cri.go:89] found id: ""
	I1009 18:30:47.066444   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.066450   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:47.066454   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:47.066495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:47.092458   41166 cri.go:89] found id: ""
	I1009 18:30:47.092471   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.092476   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:47.092481   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:47.092522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:47.118558   41166 cri.go:89] found id: ""
	I1009 18:30:47.118574   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.118582   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:47.118588   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:47.118639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:47.143956   41166 cri.go:89] found id: ""
	I1009 18:30:47.143969   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.143975   41166 logs.go:284] No container was found matching "kindnet"
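Each polling cycle sweeps the expected control-plane components through crictl with a name filter; an empty ID list is what produces the `found id: ""` / `0 containers` lines above. A rough local approximation in Go, assuming crictl is on PATH and using os/exec in place of minikube's ssh_runner (so an approximation of cri.go's behavior, not its code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names the log sweeps, in the same order.
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet"}
		for _, name := range components {
			// Matches the logged command: sudo crictl ps -a --quiet --name=<component>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}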
	I1009 18:30:47.143983   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:47.143991   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:47.204921   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:47.204939   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:47.233955   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:47.233972   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:47.299659   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:47.299725   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:47.310930   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:47.310944   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:47.365782   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:47.358862   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.359473   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361059   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361558   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.363067   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:49.866821   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:49.877492   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:49.877546   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:49.902235   41166 cri.go:89] found id: ""
	I1009 18:30:49.902249   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.902255   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:49.902260   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:49.902330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:49.927833   41166 cri.go:89] found id: ""
	I1009 18:30:49.927848   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.927855   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:49.927859   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:49.927914   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:49.952484   41166 cri.go:89] found id: ""
	I1009 18:30:49.952500   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.952515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:49.952525   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:49.952653   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:49.978974   41166 cri.go:89] found id: ""
	I1009 18:30:49.978989   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.978997   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:49.979003   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:49.979055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:50.003996   41166 cri.go:89] found id: ""
	I1009 18:30:50.004011   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.004020   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:50.004026   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:50.004074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:50.029201   41166 cri.go:89] found id: ""
	I1009 18:30:50.029213   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.029220   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:50.029225   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:50.029285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:50.055190   41166 cri.go:89] found id: ""
	I1009 18:30:50.055203   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.055208   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:50.055215   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:50.055224   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:50.124075   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:50.124092   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:50.135918   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:50.135933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:50.192425   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:50.185538   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.186038   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.187643   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.188060   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.189680   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:50.192437   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:50.192450   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:50.252346   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:50.252364   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
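The "Gathering logs for ..." steps each map to a fixed shell pipeline (journalctl for kubelet and CRI-O, a filtered dmesg, and a crictl-with-docker-fallback ps), all capped at 400 lines. A hedged local sketch of the same collection; map iteration order is arbitrary, which loosely mirrors how the gather order varies between cycles in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pipelines copied from the Run: lines above; run via bash -c, as minikube does.
		sources := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for name, cmd := range sources {
			// CombinedOutput captures stderr too, so failures still show up in the report.
			out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			fmt.Printf("== %s ==\n%s\n", name, out)
		}
	}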
	I1009 18:30:52.781770   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:52.792376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:52.792418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:52.818902   41166 cri.go:89] found id: ""
	I1009 18:30:52.818916   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.818922   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:52.818941   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:52.818984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:52.844120   41166 cri.go:89] found id: ""
	I1009 18:30:52.844145   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.844154   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:52.844160   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:52.844205   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:52.870228   41166 cri.go:89] found id: ""
	I1009 18:30:52.870242   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.870254   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:52.870259   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:52.870305   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:52.896056   41166 cri.go:89] found id: ""
	I1009 18:30:52.896073   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.896082   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:52.896089   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:52.896151   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:52.921111   41166 cri.go:89] found id: ""
	I1009 18:30:52.921126   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.921145   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:52.921152   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:52.921198   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:52.947164   41166 cri.go:89] found id: ""
	I1009 18:30:52.947180   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.947189   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:52.947194   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:52.947246   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:52.972398   41166 cri.go:89] found id: ""
	I1009 18:30:52.972412   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.972419   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:52.972426   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:52.972441   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:53.041501   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:53.041519   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:53.053308   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:53.053324   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:53.109333   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:53.102407   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.102951   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104551   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104933   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.106568   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:53.109342   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:53.109351   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:53.168700   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:53.168718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
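The timestamps show the whole cycle repeating roughly every three seconds: a pgrep check for a running kube-apiserver, then another round of log gathering when the check fails. A sketch of such a wait loop; the interval and timeout here are illustrative guesses inferred from the timestamps, not minikube's actual settings:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Matches the exact command in the log: sudo pgrep -xnf kube-apiserver.*minikube.*
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return true
			}
			time.Sleep(3 * time.Second) // assumed poll interval
		}
		return false
	}

	func main() {
		if !waitForAPIServer(2 * time.Minute) {
			fmt.Println("kube-apiserver never came up")
		}
	}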
	I1009 18:30:55.699434   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:55.709814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:55.709854   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:55.734822   41166 cri.go:89] found id: ""
	I1009 18:30:55.734841   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.734851   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:55.734858   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:55.734916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:55.759667   41166 cri.go:89] found id: ""
	I1009 18:30:55.759684   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.759692   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:55.759698   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:55.759750   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:55.785789   41166 cri.go:89] found id: ""
	I1009 18:30:55.785805   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.785813   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:55.785819   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:55.785872   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:55.810465   41166 cri.go:89] found id: ""
	I1009 18:30:55.810481   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.810490   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:55.810496   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:55.810537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:55.836067   41166 cri.go:89] found id: ""
	I1009 18:30:55.836080   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.836086   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:55.836091   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:55.836131   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:55.860951   41166 cri.go:89] found id: ""
	I1009 18:30:55.860967   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.860974   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:55.860978   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:55.861021   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:55.885761   41166 cri.go:89] found id: ""
	I1009 18:30:55.885775   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.885781   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:55.885788   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:55.885797   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:55.915265   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:55.915280   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:55.981115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:55.981146   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:55.993311   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:55.993328   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:56.050751   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:56.043889   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.044374   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.045969   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.046413   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.047907   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:56.050764   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:56.050774   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:58.612432   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:58.623245   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:58.623295   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:58.648116   41166 cri.go:89] found id: ""
	I1009 18:30:58.648129   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.648149   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:58.648156   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:58.648209   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:58.674600   41166 cri.go:89] found id: ""
	I1009 18:30:58.674619   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.674627   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:58.674634   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:58.674700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:58.700636   41166 cri.go:89] found id: ""
	I1009 18:30:58.700649   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.700655   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:58.700659   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:58.700701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:58.725891   41166 cri.go:89] found id: ""
	I1009 18:30:58.725907   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.725916   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:58.725922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:58.725984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:58.751493   41166 cri.go:89] found id: ""
	I1009 18:30:58.751509   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.751517   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:58.751523   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:58.751565   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:58.776578   41166 cri.go:89] found id: ""
	I1009 18:30:58.776594   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.776603   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:58.776609   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:58.776668   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:58.802746   41166 cri.go:89] found id: ""
	I1009 18:30:58.802759   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.802765   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:58.802772   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:58.802780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:58.871392   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:58.871409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:58.883200   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:58.883216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:58.939993   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:58.932935   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.933540   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935122   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935618   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.937106   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
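The describe-nodes step shells out to the kubectl binary bundled in the guest, pointed at the in-guest kubeconfig; while the apiserver is down the command exits with status 1, and the failure is logged as a warning (the W lines above) rather than aborting the loop. A minimal sketch of that invocation, assuming local execution instead of minikube's SSH transport:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The command string is verbatim from the log's Run: line.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// Mirrors the log: report the failure and keep going instead of exiting.
			fmt.Printf("failed describe nodes: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}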
	I1009 18:30:58.940010   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:58.940026   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:59.001043   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:59.001062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:01.533754   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:01.544314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:01.544360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:01.570557   41166 cri.go:89] found id: ""
	I1009 18:31:01.570573   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.570581   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:01.570587   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:01.570633   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:01.597498   41166 cri.go:89] found id: ""
	I1009 18:31:01.597512   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.597518   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:01.597522   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:01.597562   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:01.624834   41166 cri.go:89] found id: ""
	I1009 18:31:01.624850   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.624859   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:01.624865   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:01.624928   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:01.650834   41166 cri.go:89] found id: ""
	I1009 18:31:01.650849   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.650858   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:01.650864   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:01.650902   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:01.676498   41166 cri.go:89] found id: ""
	I1009 18:31:01.676513   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.676522   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:01.676530   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:01.676575   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:01.702274   41166 cri.go:89] found id: ""
	I1009 18:31:01.702288   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.702299   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:01.702304   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:01.702359   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:01.727077   41166 cri.go:89] found id: ""
	I1009 18:31:01.727089   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.727095   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:01.727102   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:01.727110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:01.794867   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:01.794884   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:01.807132   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:01.807156   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:01.863186   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:01.856581   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.857195   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.858743   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.859211   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.860783   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:01.863194   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:01.863203   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:01.926319   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:01.926337   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:04.456429   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:04.467647   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:04.467697   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:04.494363   41166 cri.go:89] found id: ""
	I1009 18:31:04.494376   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.494382   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:04.494386   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:04.494425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:04.519597   41166 cri.go:89] found id: ""
	I1009 18:31:04.519613   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.519622   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:04.519627   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:04.519673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:04.544960   41166 cri.go:89] found id: ""
	I1009 18:31:04.544973   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.544979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:04.544983   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:04.545025   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:04.570312   41166 cri.go:89] found id: ""
	I1009 18:31:04.570326   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.570331   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:04.570336   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:04.570376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:04.598075   41166 cri.go:89] found id: ""
	I1009 18:31:04.598088   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.598094   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:04.598098   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:04.598163   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:04.624439   41166 cri.go:89] found id: ""
	I1009 18:31:04.624452   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.624458   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:04.624462   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:04.624501   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:04.650512   41166 cri.go:89] found id: ""
	I1009 18:31:04.650526   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.650535   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:04.650542   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:04.650550   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:04.721753   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:04.721770   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:04.733512   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:04.733526   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:04.789859   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:04.782731   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.783273   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.784877   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.785331   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.786824   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:04.789871   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:04.789881   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:04.853995   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:04.854014   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:07.383979   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:07.395090   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:07.395190   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:07.421890   41166 cri.go:89] found id: ""
	I1009 18:31:07.421903   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.421909   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:07.421914   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:07.421966   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:07.448060   41166 cri.go:89] found id: ""
	I1009 18:31:07.448073   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.448079   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:07.448083   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:07.448124   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:07.474470   41166 cri.go:89] found id: ""
	I1009 18:31:07.474482   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.474488   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:07.474493   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:07.474536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:07.501777   41166 cri.go:89] found id: ""
	I1009 18:31:07.501793   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.501802   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:07.501808   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:07.501851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:07.527522   41166 cri.go:89] found id: ""
	I1009 18:31:07.527534   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.527540   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:07.527545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:07.527597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:07.552279   41166 cri.go:89] found id: ""
	I1009 18:31:07.552294   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.552302   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:07.552307   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:07.552346   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:07.576431   41166 cri.go:89] found id: ""
	I1009 18:31:07.576446   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.576454   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:07.576462   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:07.576470   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:07.643680   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:07.643696   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:07.655497   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:07.655511   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:07.710565   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:07.703625   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.704548   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706134   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706591   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.708100   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:07.710581   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:07.710591   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:07.772201   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:07.772218   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.301414   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:10.312068   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:10.312119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:10.336646   41166 cri.go:89] found id: ""
	I1009 18:31:10.336661   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.336668   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:10.336672   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:10.336714   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:10.361765   41166 cri.go:89] found id: ""
	I1009 18:31:10.361779   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.361788   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:10.361793   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:10.361849   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:10.386638   41166 cri.go:89] found id: ""
	I1009 18:31:10.386654   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.386663   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:10.386669   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:10.386715   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:10.412340   41166 cri.go:89] found id: ""
	I1009 18:31:10.412353   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.412359   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:10.412363   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:10.412402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:10.437345   41166 cri.go:89] found id: ""
	I1009 18:31:10.437360   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.437368   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:10.437372   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:10.437412   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:10.461775   41166 cri.go:89] found id: ""
	I1009 18:31:10.461790   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.461797   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:10.461804   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:10.461851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:10.486502   41166 cri.go:89] found id: ""
	I1009 18:31:10.486515   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.486521   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:10.486528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:10.486540   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:10.541525   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:10.534617   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.535191   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.536754   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.537206   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.538626   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:10.541534   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:10.541543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:10.605554   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:10.605573   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.633218   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:10.633233   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:10.698623   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:10.698640   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
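
The cycle above is one iteration of minikube's control-plane wait: pgrep for a kube-apiserver process, a per-component query against the container runtime, and, when nothing is found, a diagnostic sweep over the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal Go sketch of the per-component probe, assuming only that crictl is installed; the function and variable names are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // probe mirrors the "sudo crictl ps -a --quiet --name=<component>" calls
    // in the log: --quiet prints only container IDs, one per line, so empty
    // output means no container (running or exited) matches the name.
    func probe(component string) []string {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--name="+component).Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            if ids := probe(c); len(ids) > 0 {
                fmt.Printf("%s: %v\n", c, ids)
            } else {
                fmt.Printf("no container found matching %q\n", c)
            }
        }
    }
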
	I1009 18:31:13.212017   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:13.222887   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:13.222934   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:13.249527   41166 cri.go:89] found id: ""
	I1009 18:31:13.249545   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.249553   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:13.249558   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:13.249613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:13.276030   41166 cri.go:89] found id: ""
	I1009 18:31:13.276047   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.276055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:13.276062   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:13.276123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:13.301696   41166 cri.go:89] found id: ""
	I1009 18:31:13.301712   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.301722   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:13.301728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:13.301779   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:13.327279   41166 cri.go:89] found id: ""
	I1009 18:31:13.327297   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.327305   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:13.327314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:13.327376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:13.352370   41166 cri.go:89] found id: ""
	I1009 18:31:13.352387   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.352396   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:13.352404   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:13.352455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:13.376705   41166 cri.go:89] found id: ""
	I1009 18:31:13.376718   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.376724   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:13.376728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:13.376769   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:13.401874   41166 cri.go:89] found id: ""
	I1009 18:31:13.401887   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.401893   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:13.401899   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:13.401908   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:13.468065   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:13.468083   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.479819   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:13.479833   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:13.536357   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:13.528543   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.529016   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.530652   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532160   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532602   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:13.536371   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:13.536385   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:13.595534   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:13.595552   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
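
Every describe-nodes attempt fails identically: "dial tcp [::1]:8441: connect: connection refused". Port 8441 is where this profile's kubeconfig points kubectl, and a refused connection means nothing is listening on the socket at all, so the failure happens before TLS or authentication would even be attempted. A quick standalone probe of the same endpoint (a sketch for manual diagnosis, not part of the test suite):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The same endpoint kubectl dials in the log above.
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // expect "connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }
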
	I1009 18:31:16.124813   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:16.135558   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:16.135630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:16.161632   41166 cri.go:89] found id: ""
	I1009 18:31:16.161649   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.161657   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:16.161662   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:16.161706   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:16.187466   41166 cri.go:89] found id: ""
	I1009 18:31:16.187480   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.187486   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:16.187491   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:16.187532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:16.214699   41166 cri.go:89] found id: ""
	I1009 18:31:16.214712   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.214718   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:16.214722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:16.214772   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:16.241600   41166 cri.go:89] found id: ""
	I1009 18:31:16.241617   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.241622   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:16.241627   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:16.241670   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:16.266065   41166 cri.go:89] found id: ""
	I1009 18:31:16.266082   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.266091   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:16.266097   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:16.266158   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:16.291053   41166 cri.go:89] found id: ""
	I1009 18:31:16.291067   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.291073   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:16.291077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:16.291123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:16.316037   41166 cri.go:89] found id: ""
	I1009 18:31:16.316053   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.316058   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:16.316065   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:16.316075   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:16.374518   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:16.374537   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:16.403805   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:16.403890   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:16.472344   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:16.472362   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:16.483905   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:16.483921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:16.539056   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:16.532081   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.532735   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534334   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534743   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.536309   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:19.039513   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:19.050212   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:19.050255   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:19.074802   41166 cri.go:89] found id: ""
	I1009 18:31:19.074819   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.074828   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:19.074834   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:19.074879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:19.101554   41166 cri.go:89] found id: ""
	I1009 18:31:19.101568   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.101574   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:19.101579   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:19.101618   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:19.126592   41166 cri.go:89] found id: ""
	I1009 18:31:19.126604   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.126610   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:19.126614   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:19.126652   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:19.151096   41166 cri.go:89] found id: ""
	I1009 18:31:19.151108   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.151117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:19.151124   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:19.151179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:19.175712   41166 cri.go:89] found id: ""
	I1009 18:31:19.175730   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.175736   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:19.175740   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:19.175781   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:19.200064   41166 cri.go:89] found id: ""
	I1009 18:31:19.200080   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.200088   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:19.200094   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:19.200161   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:19.227391   41166 cri.go:89] found id: ""
	I1009 18:31:19.227406   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.227414   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:19.227424   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:19.227434   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:19.289413   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:19.289430   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:19.318081   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:19.318095   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:19.387739   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:19.387754   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:19.399028   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:19.399046   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:19.454538   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:19.447438   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.447971   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449548   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449995   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.451532   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:21.956227   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:21.966936   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:21.966995   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:21.991378   41166 cri.go:89] found id: ""
	I1009 18:31:21.991391   41166 logs.go:282] 0 containers: []
	W1009 18:31:21.991397   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:21.991402   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:21.991440   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:22.016783   41166 cri.go:89] found id: ""
	I1009 18:31:22.016796   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.016803   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:22.016808   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:22.016848   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:22.041987   41166 cri.go:89] found id: ""
	I1009 18:31:22.042003   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.042012   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:22.042018   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:22.042068   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:22.067709   41166 cri.go:89] found id: ""
	I1009 18:31:22.067722   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.067727   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:22.067735   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:22.067787   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:22.093654   41166 cri.go:89] found id: ""
	I1009 18:31:22.093666   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.093671   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:22.093675   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:22.093718   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:22.119263   41166 cri.go:89] found id: ""
	I1009 18:31:22.119276   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.119306   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:22.119310   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:22.119350   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:22.143920   41166 cri.go:89] found id: ""
	I1009 18:31:22.143933   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.143939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:22.143945   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:22.143954   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:22.172713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:22.172727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:22.241689   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:22.241717   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:22.253927   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:22.253942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:22.308454   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:22.301618   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.302105   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.303689   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.304160   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.305712   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:22.308469   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:22.308483   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
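
Note that the five "Gathering logs for ..." sections come back in a different order each cycle: describe nodes first at 18:31:10, kubelet first at 18:31:13, CRI-O first at 18:31:16. That is consistent with the gatherers being stored in a Go map, whose iteration order is deliberately randomized per range loop; a sketch of the pattern (the map contents here are paraphrased from the commands above):

    package main

    import "fmt"

    func main() {
        gatherers := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "kubectl describe nodes",
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a",
        }
        // Go randomizes map iteration order, so successive runs, like the
        // successive cycles above, visit the gatherers in different orders.
        for name, cmd := range gatherers {
            fmt.Printf("Gathering logs for %s ... (%s)\n", name, cmd)
        }
    }
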
	I1009 18:31:24.874240   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:24.885199   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:24.885251   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:24.912332   41166 cri.go:89] found id: ""
	I1009 18:31:24.912355   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.912363   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:24.912369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:24.912510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:24.938534   41166 cri.go:89] found id: ""
	I1009 18:31:24.938551   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.938557   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:24.938564   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:24.938611   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:24.965113   41166 cri.go:89] found id: ""
	I1009 18:31:24.965125   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.965131   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:24.965151   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:24.965204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:24.991845   41166 cri.go:89] found id: ""
	I1009 18:31:24.991858   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.991864   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:24.991868   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:24.991910   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:25.018693   41166 cri.go:89] found id: ""
	I1009 18:31:25.018706   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.018711   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:25.018717   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:25.018756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:25.044931   41166 cri.go:89] found id: ""
	I1009 18:31:25.044948   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.044957   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:25.044963   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:25.045014   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:25.071449   41166 cri.go:89] found id: ""
	I1009 18:31:25.071465   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.071474   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:25.071483   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:25.071495   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:25.138301   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:25.138320   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:25.150561   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:25.150575   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:25.208095   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:25.201000   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.201519   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203190   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203673   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.205213   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:25.208105   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:25.208114   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:25.272810   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:25.272829   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
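
The container-status gatherer shells out with a double fallback: "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" resolves crictl to its full path when which finds it, keeps the bare name otherwise, and only falls back to docker if the crictl invocation itself fails. The same resolution expressed in Go, as an illustrative sketch using nothing beyond the standard library:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // which crictl || echo crictl: prefer the resolved path, else the bare name.
        tool, err := exec.LookPath("crictl")
        if err != nil {
            tool = "crictl"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        if err != nil {
            // || sudo docker ps -a: the Docker fallback for non-CRI runtimes.
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no container runtime answered:", err)
            return
        }
        fmt.Print(string(out))
    }
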
	I1009 18:31:27.804229   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:27.815074   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:27.815120   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:27.840171   41166 cri.go:89] found id: ""
	I1009 18:31:27.840188   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.840196   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:27.840200   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:27.840274   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:27.866963   41166 cri.go:89] found id: ""
	I1009 18:31:27.866981   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.866990   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:27.866996   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:27.867076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:27.893152   41166 cri.go:89] found id: ""
	I1009 18:31:27.893169   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.893177   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:27.893183   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:27.893235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:27.920337   41166 cri.go:89] found id: ""
	I1009 18:31:27.920350   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.920356   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:27.920361   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:27.920403   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:27.945940   41166 cri.go:89] found id: ""
	I1009 18:31:27.945956   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.945964   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:27.945971   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:27.946036   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:27.971578   41166 cri.go:89] found id: ""
	I1009 18:31:27.971594   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.971600   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:27.971604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:27.971651   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:27.998876   41166 cri.go:89] found id: ""
	I1009 18:31:27.998890   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.998898   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:27.998907   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:27.998919   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:28.060031   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:28.060050   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:28.090280   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:28.090294   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:28.155986   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:28.156004   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:28.167898   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:28.167912   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:28.224480   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:28.217373   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.217904   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219580   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219973   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.221548   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:30.726158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:30.736658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:30.736713   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:30.762096   41166 cri.go:89] found id: ""
	I1009 18:31:30.762111   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.762119   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:30.762125   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:30.762193   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:30.787132   41166 cri.go:89] found id: ""
	I1009 18:31:30.787161   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.787169   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:30.787175   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:30.787234   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:30.813496   41166 cri.go:89] found id: ""
	I1009 18:31:30.813510   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.813515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:30.813519   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:30.813558   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:30.838073   41166 cri.go:89] found id: ""
	I1009 18:31:30.838089   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.838098   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:30.838104   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:30.838167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:30.864286   41166 cri.go:89] found id: ""
	I1009 18:31:30.864301   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.864307   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:30.864312   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:30.864353   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:30.890806   41166 cri.go:89] found id: ""
	I1009 18:31:30.890819   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.890825   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:30.890830   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:30.890885   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:30.917461   41166 cri.go:89] found id: ""
	I1009 18:31:30.917474   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.917480   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:30.917487   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:30.917496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:30.947122   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:30.947157   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:31.013114   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:31.013130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:31.025904   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:31.025924   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:31.081194   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:31.074116   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.074697   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076284   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076747   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.078298   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:31.081206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:31.081217   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:33.641553   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:33.652051   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:33.652105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:33.676453   41166 cri.go:89] found id: ""
	I1009 18:31:33.676467   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.676473   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:33.676477   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:33.676517   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:33.701838   41166 cri.go:89] found id: ""
	I1009 18:31:33.701854   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.701862   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:33.701868   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:33.701916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:33.727771   41166 cri.go:89] found id: ""
	I1009 18:31:33.727787   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.727794   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:33.727799   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:33.727839   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:33.753654   41166 cri.go:89] found id: ""
	I1009 18:31:33.753670   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.753681   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:33.753686   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:33.753731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:33.780405   41166 cri.go:89] found id: ""
	I1009 18:31:33.780421   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.780430   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:33.780436   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:33.780477   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:33.807435   41166 cri.go:89] found id: ""
	I1009 18:31:33.807448   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.807454   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:33.807458   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:33.807502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:33.833608   41166 cri.go:89] found id: ""
	I1009 18:31:33.833625   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.833633   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:33.833642   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:33.833655   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:33.900086   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:33.900106   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:33.912409   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:33.912429   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:33.968532   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:33.961720   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.962278   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.963911   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.964427   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.965875   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:33.968541   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:33.968551   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:34.031879   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:34.031899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:36.563728   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:36.574356   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:36.574399   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:36.600194   41166 cri.go:89] found id: ""
	I1009 18:31:36.600209   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.600217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:36.600223   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:36.600284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:36.626075   41166 cri.go:89] found id: ""
	I1009 18:31:36.626096   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.626106   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:36.626111   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:36.626182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:36.652078   41166 cri.go:89] found id: ""
	I1009 18:31:36.652098   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.652104   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:36.652109   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:36.652170   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:36.677462   41166 cri.go:89] found id: ""
	I1009 18:31:36.677474   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.677480   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:36.677484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:36.677522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:36.703778   41166 cri.go:89] found id: ""
	I1009 18:31:36.703793   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.703801   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:36.703807   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:36.703856   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:36.729868   41166 cri.go:89] found id: ""
	I1009 18:31:36.729884   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.729893   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:36.729899   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:36.729942   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:36.756775   41166 cri.go:89] found id: ""
	I1009 18:31:36.756787   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.756793   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:36.756801   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:36.756810   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:36.826838   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:36.826854   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:36.838705   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:36.838718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:36.894816   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:36.887889   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.888440   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890010   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890538   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.891994   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:36.887889   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.888440   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890010   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890538   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.891994   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:36.894826   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:36.894838   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:36.959865   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:36.959882   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:39.490368   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:39.501284   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:39.501335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:39.527003   41166 cri.go:89] found id: ""
	I1009 18:31:39.527016   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.527022   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:39.527026   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:39.527071   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:39.553355   41166 cri.go:89] found id: ""
	I1009 18:31:39.553370   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.553379   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:39.553384   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:39.553425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:39.579105   41166 cri.go:89] found id: ""
	I1009 18:31:39.579121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.579128   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:39.579133   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:39.579203   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:39.604899   41166 cri.go:89] found id: ""
	I1009 18:31:39.604913   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.604919   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:39.604928   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:39.604985   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:39.630635   41166 cri.go:89] found id: ""
	I1009 18:31:39.630647   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.630653   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:39.630657   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:39.630701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:39.656106   41166 cri.go:89] found id: ""
	I1009 18:31:39.656121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.656129   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:39.656148   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:39.656207   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:39.681655   41166 cri.go:89] found id: ""
	I1009 18:31:39.681667   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.681673   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:39.681680   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:39.681688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:39.744126   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:39.744152   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:39.772799   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:39.772812   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:39.844571   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:39.844590   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:39.856246   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:39.856263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:39.911854   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:39.905117   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.905586   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907188   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907677   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.909231   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:39.905117   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.905586   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907188   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907677   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.909231   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
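Each block above is one pass of the same diagnostic loop: minikube polls crictl for every expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly every three seconds. The following is a minimal Go sketch of that poll-and-gather pattern; it is illustrative only, with hypothetical names (components, containerIDs, the bounded attempt count) that are assumptions, not minikube's actual implementation.

    // Hypothetical sketch of the poll-and-gather loop seen in the log.
    // Not minikube source; names and structure are illustrative.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // The control-plane components the log checks on every pass.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    // containerIDs mirrors `sudo crictl ps -a --quiet --name=<component>`
    // from the log: it returns the matching container IDs, if any.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for attempt := 1; attempt <= 5; attempt++ {
    		anyFound := false
    		for _, c := range components {
    			ids, err := containerIDs(c)
    			if err != nil || len(ids) == 0 {
    				fmt.Printf("no container was found matching %q\n", c)
    				continue
    			}
    			anyFound = true
    		}
    		if !anyFound {
    			// In the log this is where kubelet, dmesg, describe-nodes,
    			// CRI-O, and container-status logs are gathered.
    			fmt.Println("gathering kubelet/dmesg/CRI-O/container-status logs ...")
    		}
    		time.Sleep(3 * time.Second) // the log shows ~3s between attempts
    	}
    }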
	I1009 18:31:42.413528   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:42.424343   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:42.424407   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:42.450128   41166 cri.go:89] found id: ""
	I1009 18:31:42.450165   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.450173   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:42.450180   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:42.450239   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:42.475946   41166 cri.go:89] found id: ""
	I1009 18:31:42.475961   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.475970   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:42.475976   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:42.476031   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:42.502865   41166 cri.go:89] found id: ""
	I1009 18:31:42.502881   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.502890   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:42.502896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:42.502946   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:42.530798   41166 cri.go:89] found id: ""
	I1009 18:31:42.530814   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.530823   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:42.530829   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:42.530879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:42.556524   41166 cri.go:89] found id: ""
	I1009 18:31:42.556539   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.556548   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:42.556554   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:42.556605   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:42.582936   41166 cri.go:89] found id: ""
	I1009 18:31:42.582953   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.582961   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:42.582967   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:42.583055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:42.609400   41166 cri.go:89] found id: ""
	I1009 18:31:42.609415   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.609424   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:42.609433   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:42.609444   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:42.671451   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:42.671468   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:42.700813   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:42.700832   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:42.769841   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:42.769859   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:42.782244   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:42.782261   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:42.840011   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:42.832755   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.833376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.834917   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.835376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.836976   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:42.832755   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.833376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.834917   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.835376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.836976   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:45.340705   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:45.350991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:45.351034   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:45.375913   41166 cri.go:89] found id: ""
	I1009 18:31:45.375926   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.375932   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:45.375936   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:45.375974   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:45.402366   41166 cri.go:89] found id: ""
	I1009 18:31:45.402380   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.402386   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:45.402391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:45.402432   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:45.428247   41166 cri.go:89] found id: ""
	I1009 18:31:45.428263   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.428272   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:45.428278   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:45.428332   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:45.454072   41166 cri.go:89] found id: ""
	I1009 18:31:45.454087   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.454094   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:45.454103   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:45.454173   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:45.479985   41166 cri.go:89] found id: ""
	I1009 18:31:45.480000   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.480006   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:45.480012   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:45.480064   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:45.505956   41166 cri.go:89] found id: ""
	I1009 18:31:45.505972   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.505980   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:45.505986   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:45.506041   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:45.530757   41166 cri.go:89] found id: ""
	I1009 18:31:45.530770   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.530775   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:45.530782   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:45.530791   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:45.597676   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:45.597693   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:45.609290   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:45.609305   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:45.666583   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:45.659856   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.660431   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.661987   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.662451   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.663976   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:45.659856   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.660431   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.661987   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.662451   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.663976   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:45.666593   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:45.666604   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:45.730000   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:45.730018   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:48.259768   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:48.270482   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:48.270528   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:48.297438   41166 cri.go:89] found id: ""
	I1009 18:31:48.297454   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.297462   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:48.297467   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:48.297510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:48.323680   41166 cri.go:89] found id: ""
	I1009 18:31:48.323695   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.323704   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:48.323710   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:48.323756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:48.348422   41166 cri.go:89] found id: ""
	I1009 18:31:48.348437   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.348445   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:48.348450   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:48.348507   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:48.373232   41166 cri.go:89] found id: ""
	I1009 18:31:48.373247   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.373253   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:48.373263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:48.373306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:48.398755   41166 cri.go:89] found id: ""
	I1009 18:31:48.398770   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.398776   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:48.398781   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:48.398822   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:48.423977   41166 cri.go:89] found id: ""
	I1009 18:31:48.423993   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.423999   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:48.424004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:48.424056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:48.450473   41166 cri.go:89] found id: ""
	I1009 18:31:48.450486   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.450492   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:48.450499   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:48.450510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:48.461974   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:48.461997   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:48.519875   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:48.513250   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.513778   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515240   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515817   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.517350   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:48.513250   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.513778   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515240   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515817   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.517350   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:48.519884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:48.519893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:48.579801   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:48.579819   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:48.609008   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:48.609031   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
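Every describe-nodes attempt in this stretch fails the same way: kubectl cannot reach the API server because nothing is listening on localhost:8441, which is consistent with crictl never finding a kube-apiserver container. A standalone TCP dial reproduces the error string the log keeps repeating; this is a hedged illustration for reading the log, not part of the test suite.

    // Illustrative reproduction of the recurring failure: with no
    // kube-apiserver listening on localhost:8441, a plain TCP dial
    // fails with the same "connection refused" the log reports.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
    	if err != nil {
    		// Expected while the apiserver is down, e.g.
    		// "dial tcp [::1]:8441: connect: connection refused"
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("apiserver port is open")
    }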
	I1009 18:31:51.179735   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:51.190623   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:51.190689   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:51.215839   41166 cri.go:89] found id: ""
	I1009 18:31:51.215854   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.215860   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:51.215866   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:51.215919   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:51.241754   41166 cri.go:89] found id: ""
	I1009 18:31:51.241771   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.241781   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:51.241786   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:51.241834   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:51.269204   41166 cri.go:89] found id: ""
	I1009 18:31:51.269221   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.269227   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:51.269233   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:51.269288   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:51.296498   41166 cri.go:89] found id: ""
	I1009 18:31:51.296514   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.296522   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:51.296527   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:51.296573   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:51.323034   41166 cri.go:89] found id: ""
	I1009 18:31:51.323049   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.323057   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:51.323063   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:51.323112   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:51.348104   41166 cri.go:89] found id: ""
	I1009 18:31:51.348119   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.348125   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:51.348131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:51.348199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:51.374228   41166 cri.go:89] found id: ""
	I1009 18:31:51.374242   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.374248   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:51.374255   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:51.374265   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:51.403810   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:51.403825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:51.474611   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:51.474630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:51.486750   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:51.486766   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:51.542637   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:51.535796   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.536370   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.537923   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.538394   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.539906   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:51.535796   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.536370   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.537923   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.538394   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.539906   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:51.542656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:51.542666   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:54.103184   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:54.114409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:54.114455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:54.140634   41166 cri.go:89] found id: ""
	I1009 18:31:54.140646   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.140652   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:54.140656   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:54.140703   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:54.166896   41166 cri.go:89] found id: ""
	I1009 18:31:54.166911   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.166918   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:54.166922   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:54.166962   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:54.193155   41166 cri.go:89] found id: ""
	I1009 18:31:54.193170   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.193176   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:54.193181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:54.193222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:54.217754   41166 cri.go:89] found id: ""
	I1009 18:31:54.217767   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.217772   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:54.217777   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:54.217819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:54.243823   41166 cri.go:89] found id: ""
	I1009 18:31:54.243837   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.243843   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:54.243848   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:54.243887   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:54.271827   41166 cri.go:89] found id: ""
	I1009 18:31:54.271841   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.271847   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:54.271852   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:54.271895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:54.297907   41166 cri.go:89] found id: ""
	I1009 18:31:54.297920   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.297925   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:54.297932   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:54.297942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:54.365493   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:54.365510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:54.377258   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:54.377275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:54.432221   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:54.425355   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.425907   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427547   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427972   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.429614   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:54.425355   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.425907   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427547   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427972   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.429614   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:54.432234   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:54.432244   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:54.492172   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:54.492189   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.022444   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:57.033223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:57.033285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:57.059246   41166 cri.go:89] found id: ""
	I1009 18:31:57.059267   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.059273   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:57.059277   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:57.059348   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:57.084187   41166 cri.go:89] found id: ""
	I1009 18:31:57.084199   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.084205   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:57.084209   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:57.084250   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:57.109765   41166 cri.go:89] found id: ""
	I1009 18:31:57.109778   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.109784   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:57.109788   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:57.109828   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:57.135796   41166 cri.go:89] found id: ""
	I1009 18:31:57.135809   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.135817   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:57.135824   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:57.136027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:57.162702   41166 cri.go:89] found id: ""
	I1009 18:31:57.162715   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.162720   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:57.162724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:57.162773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:57.189575   41166 cri.go:89] found id: ""
	I1009 18:31:57.189588   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.189594   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:57.189598   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:57.189639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:57.214916   41166 cri.go:89] found id: ""
	I1009 18:31:57.214931   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.214939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:57.214946   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:57.214956   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:57.226333   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:57.226347   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:57.282176   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:57.275375   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.275847   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277403   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277780   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.279430   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:57.275375   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.275847   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277403   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277780   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.279430   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:57.282186   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:57.282196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:57.341981   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:57.341999   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.372028   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:57.372043   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:59.940902   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:59.951810   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:59.951853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:59.977888   41166 cri.go:89] found id: ""
	I1009 18:31:59.977902   41166 logs.go:282] 0 containers: []
	W1009 18:31:59.977908   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:59.977912   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:59.977977   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:00.004236   41166 cri.go:89] found id: ""
	I1009 18:32:00.004252   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.004265   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:00.004293   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:00.004347   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:00.030808   41166 cri.go:89] found id: ""
	I1009 18:32:00.030826   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.030836   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:00.030842   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:00.030895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:00.056760   41166 cri.go:89] found id: ""
	I1009 18:32:00.056772   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.056778   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:00.056782   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:00.056826   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:00.083048   41166 cri.go:89] found id: ""
	I1009 18:32:00.083062   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.083068   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:00.083072   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:00.083116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:00.109679   41166 cri.go:89] found id: ""
	I1009 18:32:00.109693   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.109699   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:00.109704   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:00.109753   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:00.135808   41166 cri.go:89] found id: ""
	I1009 18:32:00.135820   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.135826   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:00.135833   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:00.135841   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:00.192719   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:00.185431   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.185945   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.187601   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.188147   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.189704   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
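Every `kubectl describe nodes` attempt in this loop fails identically: the TCP connect to port 8441 is refused, meaning nothing is listening inside the node at all, so the failure is upstream of TLS, authentication, or kubeconfig contents. A quick check that separates "no listener" from "listener serving errors" (hypothetical helper, not part of minikube):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // checkPort distinguishes "connection refused" (no listener bound to
    // the port) from a listener that accepts but then serves errors.
    func checkPort(addr string) {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		fmt.Printf("%s: %v\n", addr, err)
    		return
    	}
    	conn.Close()
    	fmt.Printf("%s: listener present\n", addr)
    }

    func main() {
    	checkPort("localhost:8441")
    }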
	I1009 18:32:00.192732   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:00.192744   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:00.253264   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:00.253287   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:00.283450   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:00.283463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:00.350291   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:00.350309   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
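Each failed iteration gathers the same five log sources: kubelet, dmesg, describe nodes, CRI-O, and container status. To collect the same bundle by hand inside the node (for example after `minikube ssh`, an assumption about how a reader would reproduce this), a small Go driver with the command strings copied from the log above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // The five log sources gathered on every failed iteration; output is
    // simply printed. Meant to be run inside the minikube node.
    func main() {
    	sources := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
    		{"CRI-O", "sudo journalctl -u crio -n 400"},
    		{"container status", "sudo crictl ps -a"},
    	}
    	for _, s := range sources {
    		fmt.Println("===", s.name, "===")
    		out, _ := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
    		fmt.Print(string(out))
    	}
    }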
	I1009 18:32:02.863750   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:02.874396   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:02.874434   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:02.900500   41166 cri.go:89] found id: ""
	I1009 18:32:02.900513   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.900519   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:02.900523   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:02.900563   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:02.926067   41166 cri.go:89] found id: ""
	I1009 18:32:02.926083   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.926092   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:02.926099   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:02.926157   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:02.951112   41166 cri.go:89] found id: ""
	I1009 18:32:02.951127   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.951147   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:02.951154   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:02.951202   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:02.976038   41166 cri.go:89] found id: ""
	I1009 18:32:02.976052   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.976057   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:02.976062   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:02.976114   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:03.001712   41166 cri.go:89] found id: ""
	I1009 18:32:03.001724   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.001730   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:03.001734   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:03.001773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:03.028181   41166 cri.go:89] found id: ""
	I1009 18:32:03.028195   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.028201   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:03.028205   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:03.028247   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:03.054529   41166 cri.go:89] found id: ""
	I1009 18:32:03.054541   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.054547   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:03.054554   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:03.054565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:03.122196   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:03.122214   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:03.133617   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:03.133633   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:03.189282   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:03.182610   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.183115   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.184674   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.185052   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.186556   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:03.189291   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:03.189301   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:03.252856   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:03.252874   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:05.784812   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:05.795352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:05.795402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:05.820276   41166 cri.go:89] found id: ""
	I1009 18:32:05.820289   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.820295   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:05.820300   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:05.820341   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:05.846395   41166 cri.go:89] found id: ""
	I1009 18:32:05.846408   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.846414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:05.846418   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:05.846469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:05.872185   41166 cri.go:89] found id: ""
	I1009 18:32:05.872199   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.872205   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:05.872209   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:05.872254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:05.898231   41166 cri.go:89] found id: ""
	I1009 18:32:05.898251   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.898257   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:05.898263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:05.898303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:05.923683   41166 cri.go:89] found id: ""
	I1009 18:32:05.923699   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.923707   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:05.923712   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:05.923755   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:05.949168   41166 cri.go:89] found id: ""
	I1009 18:32:05.949183   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.949188   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:05.949193   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:05.949236   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:05.975320   41166 cri.go:89] found id: ""
	I1009 18:32:05.975332   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.975338   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:05.975344   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:05.975354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:06.041809   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:06.041827   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:06.054016   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:06.054040   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:06.110078   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:06.103223   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.103767   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105448   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105875   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.107466   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:06.110088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:06.110097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:06.172545   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:06.172564   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:08.701488   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:08.712540   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:08.712594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:08.738583   41166 cri.go:89] found id: ""
	I1009 18:32:08.738601   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.738608   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:08.738613   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:08.738654   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:08.764379   41166 cri.go:89] found id: ""
	I1009 18:32:08.764396   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.764404   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:08.764412   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:08.764466   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:08.790325   41166 cri.go:89] found id: ""
	I1009 18:32:08.790351   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.790360   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:08.790367   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:08.790417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:08.816765   41166 cri.go:89] found id: ""
	I1009 18:32:08.816780   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.816788   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:08.816792   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:08.816844   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:08.842038   41166 cri.go:89] found id: ""
	I1009 18:32:08.842050   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.842055   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:08.842060   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:08.842119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:08.868221   41166 cri.go:89] found id: ""
	I1009 18:32:08.868236   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.868243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:08.868248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:08.868291   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:08.894780   41166 cri.go:89] found id: ""
	I1009 18:32:08.894797   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.894804   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:08.894810   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:08.894820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:08.952094   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:08.944952   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.945523   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947209   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947687   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.949320   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:08.952107   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:08.952121   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:09.012751   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:09.012769   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:09.042946   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:09.042958   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:09.111059   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:09.111076   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.624407   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:11.635246   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:11.635303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:11.661128   41166 cri.go:89] found id: ""
	I1009 18:32:11.661159   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.661167   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:11.661173   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:11.661225   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:11.685846   41166 cri.go:89] found id: ""
	I1009 18:32:11.685860   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.685866   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:11.685870   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:11.685909   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:11.711700   41166 cri.go:89] found id: ""
	I1009 18:32:11.711714   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.711719   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:11.711723   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:11.711770   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:11.737208   41166 cri.go:89] found id: ""
	I1009 18:32:11.737220   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.737225   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:11.737230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:11.737278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:11.762359   41166 cri.go:89] found id: ""
	I1009 18:32:11.762370   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.762376   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:11.762380   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:11.762430   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:11.787996   41166 cri.go:89] found id: ""
	I1009 18:32:11.788011   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.788019   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:11.788024   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:11.788084   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:11.812657   41166 cri.go:89] found id: ""
	I1009 18:32:11.812671   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.812677   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:11.812685   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:11.812694   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:11.879681   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:11.879697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.891109   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:11.891124   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:11.947646   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:11.940720   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.941253   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.942799   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.943257   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.944825   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:11.947659   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:11.947672   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:12.013733   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:12.013750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.545559   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:14.556586   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:14.556634   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:14.584233   41166 cri.go:89] found id: ""
	I1009 18:32:14.584250   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.584258   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:14.584263   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:14.584312   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:14.610477   41166 cri.go:89] found id: ""
	I1009 18:32:14.610493   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.610500   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:14.610505   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:14.610560   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:14.635807   41166 cri.go:89] found id: ""
	I1009 18:32:14.635824   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.635832   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:14.635837   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:14.635880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:14.661016   41166 cri.go:89] found id: ""
	I1009 18:32:14.661034   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.661043   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:14.661049   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:14.661098   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:14.689198   41166 cri.go:89] found id: ""
	I1009 18:32:14.689212   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.689217   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:14.689223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:14.689278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:14.714892   41166 cri.go:89] found id: ""
	I1009 18:32:14.714908   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.714917   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:14.714923   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:14.714971   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:14.740412   41166 cri.go:89] found id: ""
	I1009 18:32:14.740425   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.740433   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:14.740440   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:14.740449   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:14.803421   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:14.803439   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.831580   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:14.831594   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:14.901628   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:14.901653   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:14.914304   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:14.914326   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:14.971146   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:14.964264   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.964764   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966352   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966731   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.968402   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:17.472817   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:17.483574   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:17.483619   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:17.510868   41166 cri.go:89] found id: ""
	I1009 18:32:17.510882   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.510891   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:17.510896   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:17.510956   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:17.537306   41166 cri.go:89] found id: ""
	I1009 18:32:17.537319   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.537325   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:17.537329   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:17.537372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:17.564957   41166 cri.go:89] found id: ""
	I1009 18:32:17.564972   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.564978   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:17.564984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:17.565039   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:17.591401   41166 cri.go:89] found id: ""
	I1009 18:32:17.591418   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.591425   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:17.591430   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:17.591476   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:17.617237   41166 cri.go:89] found id: ""
	I1009 18:32:17.617250   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.617256   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:17.617260   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:17.617302   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:17.642328   41166 cri.go:89] found id: ""
	I1009 18:32:17.642342   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.642348   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:17.642352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:17.642400   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:17.668302   41166 cri.go:89] found id: ""
	I1009 18:32:17.668315   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.668321   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:17.668327   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:17.668336   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:17.679448   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:17.679463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:17.736174   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:17.728959   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.729672   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731395   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731844   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.733446   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:32:17.736227   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:17.736236   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:17.795423   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:17.795442   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:17.824553   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:17.824567   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:20.394282   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:20.405003   41166 kubeadm.go:601] duration metric: took 4m2.649024916s to restartPrimaryControlPlane
	W1009 18:32:20.405078   41166 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 18:32:20.405162   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
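The restart path has now timed out (4m2.6s above) and minikube falls back to wiping the control plane and re-initializing it. A hypothetical Go condensation of the reset, config staging, and `kubeadm init` steps that follow in the log (the PATH injection and the long --ignore-preflight-errors list are elided; this is a sketch, not minikube's actual code):

    package main

    import "os/exec"

    // run executes a command through bash, roughly what ssh_runner does
    // over SSH in the log above.
    func run(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    // resetAndReinit condenses the fallback sequence: forced reset, staging
    // the generated config, then a fresh init.
    func resetAndReinit() error {
    	steps := []string{
    		"sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force",
    		"sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml",
    		"sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml",
    	}
    	for _, s := range steps {
    		if err := run(s); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() { _ = resetAndReinit() }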
	I1009 18:32:20.850567   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:32:20.863734   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:32:20.872360   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:32:20.872401   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:32:20.880727   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:32:20.880752   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:32:20.880802   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:32:20.888758   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:32:20.888797   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:32:20.896370   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:32:20.904128   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:32:20.904188   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:32:20.911725   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.919740   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:32:20.919783   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.927592   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:32:20.935300   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:32:20.935348   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
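The grep/rm sequence above is stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at this cluster's control-plane endpoint. Since `kubeadm reset` removed them all, every grep exits with status 2 (file missing) and the removals are no-ops. The same logic as a sketch (paths and endpoint taken from the log; error handling simplified):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleKubeconfigs keeps each kubeconfig only if it mentions the
    // expected control-plane endpoint, otherwise deletes it so that
    // `kubeadm init` regenerates it.
    func cleanStaleKubeconfigs(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent (status 1)
    		// or the file is missing (status 2, as in the log).
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s: stale or missing, removing\n", f)
    			os.Remove(f) // no-op if the file is already gone
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8441")
    }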
	I1009 18:32:20.942573   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:32:20.998838   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:32:21.055610   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:36:23.829821   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:36:23.829939   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
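The error above is kubeadm's wait-control-plane phase reporting all three health probes down: apiserver /livez on 8441, controller-manager /healthz on 10257, and scheduler /livez on 10259, each refused at the TCP level. The probes can be reproduced as plain HTTPS GETs (self-signed serving certs are expected on these ports, hence InsecureSkipVerify; endpoints are copied from the error):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // The three probes kubeadm performs, as plain HTTPS GETs. "connection
    // refused" here means the component never bound its port, i.e. it
    // crashed or was never started by the kubelet.
    func main() {
    	client := &http.Client{
    		Timeout: 10 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	endpoints := map[string]string{
    		"kube-apiserver":          "https://192.168.49.2:8441/livez",
    		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
    		"kube-scheduler":          "https://127.0.0.1:10259/livez",
    	}
    	for name, u := range endpoints {
    		resp, err := client.Get(u)
    		if err != nil {
    			fmt.Printf("%s: %v\n", name, err)
    			continue
    		}
    		resp.Body.Close()
    		fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
    	}
    }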
	I1009 18:36:23.832833   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:36:23.832899   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:36:23.833001   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:36:23.833078   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:36:23.833131   41166 kubeadm.go:318] OS: Linux
	I1009 18:36:23.833211   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:36:23.833255   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:36:23.833293   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:36:23.833332   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:36:23.833371   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:36:23.833408   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:36:23.833452   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:36:23.833487   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:36:23.833563   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:36:23.833644   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:36:23.833715   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:36:23.833763   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:36:23.836738   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:36:23.836809   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:36:23.836876   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:36:23.836946   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:36:23.836995   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:36:23.837054   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:36:23.837106   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:36:23.837180   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:36:23.837230   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:36:23.837295   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:36:23.837361   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:36:23.837391   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:36:23.837444   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:36:23.837485   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:36:23.837544   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:36:23.837590   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:36:23.837644   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:36:23.837687   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:36:23.837754   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:36:23.837807   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:36:23.840574   41166 out.go:252]   - Booting up control plane ...
	I1009 18:36:23.840651   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:36:23.840709   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:36:23.840759   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:36:23.840847   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:36:23.840933   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:36:23.841023   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:36:23.841122   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:36:23.841176   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:36:23.841286   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:36:23.841382   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:36:23.841430   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500920961s
	I1009 18:36:23.841508   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:36:23.841575   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:36:23.841650   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:36:23.841721   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:36:23.841779   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	I1009 18:36:23.841844   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	I1009 18:36:23.841921   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	I1009 18:36:23.841927   41166 kubeadm.go:318] 
	I1009 18:36:23.842001   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:36:23.842071   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:36:23.842160   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:36:23.842237   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:36:23.842297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:36:23.842366   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:36:23.842394   41166 kubeadm.go:318] 
	W1009 18:36:23.842478   41166 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500920961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
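
The kubeadm hint above names the crictl commands but stops short of a full sequence. A minimal troubleshooting sketch, assuming the CRI-O socket path shown in the log, with CONTAINERID as a placeholder for an ID taken from the 'ps' output:

    # List every Kubernetes container, including exited ones (per the hint above).
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Inspect a failing container's logs; CONTAINERID is a placeholder, not a real ID.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # If no containers were ever created (as the probes later in this log show),
    # fall back to the kubelet and CRI-O journals:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400

In this run the 'ps -a' listing comes back empty (see the container status section below), so the journals are the only useful source.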
	
	I1009 18:36:23.842555   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:36:24.285465   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:36:24.298222   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:36:24.298276   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:36:24.306625   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:36:24.306635   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:36:24.306675   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:36:24.314710   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:36:24.314750   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:36:24.322418   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:36:24.330123   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:36:24.330187   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:36:24.337953   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.346125   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:36:24.346179   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.354153   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:36:24.362094   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:36:24.362133   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
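
The grep/rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. The same pattern as a compact shell sketch, using the endpoint shown in the log:

    # Keep a kubeconfig only if it points at the expected control-plane endpoint;
    # anything else (including a missing file) is removed so kubeadm regenerates it.
    endpoint="https://control-plane.minikube.internal:8441"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done

Here all four files are already absent after 'kubeadm reset', so every branch falls through to the no-op 'rm -f'.
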
	I1009 18:36:24.369784   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:36:24.426834   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:36:24.485641   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:40:27.797583   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:40:27.797662   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:40:27.800620   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:40:27.800659   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:40:27.800736   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:40:27.800783   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:40:27.800811   41166 kubeadm.go:318] OS: Linux
	I1009 18:40:27.800847   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:40:27.800885   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:40:27.800924   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:40:27.800985   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:40:27.801052   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:40:27.801090   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:40:27.801156   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:40:27.801201   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:40:27.801265   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:40:27.801343   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:40:27.801412   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:40:27.801484   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:40:27.805055   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:40:27.805120   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:40:27.805218   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:40:27.805293   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:40:27.805339   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:40:27.805412   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:40:27.805457   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:40:27.805510   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:40:27.805564   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:40:27.805620   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:40:27.805693   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:40:27.805748   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:40:27.805808   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:40:27.805852   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:40:27.805907   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:40:27.805950   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:40:27.805998   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:40:27.806045   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:40:27.806113   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:40:27.806212   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:40:27.807603   41166 out.go:252]   - Booting up control plane ...
	I1009 18:40:27.807673   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:40:27.807748   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:40:27.807805   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:40:27.807888   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:40:27.807967   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:40:27.808054   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:40:27.808118   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:40:27.808182   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:40:27.808282   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:40:27.808373   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:40:27.808424   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000969803s
	I1009 18:40:27.808512   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:40:27.808585   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:40:27.808667   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:40:27.808740   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:40:27.808798   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	I1009 18:40:27.808855   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	I1009 18:40:27.808919   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	I1009 18:40:27.808921   41166 kubeadm.go:318] 
	I1009 18:40:27.808989   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:40:27.809052   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:40:27.809124   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:40:27.809239   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:40:27.809297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:40:27.809386   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:40:27.809399   41166 kubeadm.go:318] 
	I1009 18:40:27.809438   41166 kubeadm.go:402] duration metric: took 12m10.090749097s to StartCluster
	I1009 18:40:27.809468   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:40:27.809513   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:40:27.837743   41166 cri.go:89] found id: ""
	I1009 18:40:27.837757   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.837763   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:40:27.837768   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:40:27.837814   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:40:27.863718   41166 cri.go:89] found id: ""
	I1009 18:40:27.863732   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.863738   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:40:27.863748   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:40:27.863792   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:40:27.889900   41166 cri.go:89] found id: ""
	I1009 18:40:27.889914   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.889920   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:40:27.889924   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:40:27.889980   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:40:27.916941   41166 cri.go:89] found id: ""
	I1009 18:40:27.916954   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.916960   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:40:27.916965   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:40:27.917024   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:40:27.943791   41166 cri.go:89] found id: ""
	I1009 18:40:27.943804   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.943809   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:40:27.943814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:40:27.943860   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:40:27.970612   41166 cri.go:89] found id: ""
	I1009 18:40:27.970625   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.970631   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:40:27.970635   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:40:27.970683   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:40:27.997688   41166 cri.go:89] found id: ""
	I1009 18:40:27.997700   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.997706   41166 logs.go:284] No container was found matching "kindnet"
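
The probes above ask CRI-O, name by name, whether any expected control-plane or networking container was ever created, and every probe returns an empty ID list. The same check as one loop, built from the exact command the log shows:

    # An empty result for a name means CRI-O never created that container,
    # matching the "0 containers" lines above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container found matching \"$name\""
    done
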
	I1009 18:40:27.997713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:40:27.997721   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:40:28.064711   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:40:28.064730   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:40:28.076960   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:40:28.076978   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:40:28.135195   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:40:28.135206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:40:28.135216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:40:28.194198   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:40:28.194216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 18:40:28.224308   41166 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:40:28.224355   41166 out.go:285] * 
	W1009 18:40:28.224482   41166 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:40:28.224505   41166 out.go:285] * 
	W1009 18:40:28.226335   41166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:40:28.230950   41166 out.go:203] 
	W1009 18:40:28.232526   41166 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:40:28.232549   41166 out.go:285] * 
	I1009 18:40:28.235189   41166 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.56001007Z" level=info msg="createCtr: removing container a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.560045273Z" level=info msg="createCtr: deleting container a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291 from storage" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.562455923Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.536482041Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ab7fe81f-8ca6-4783-97fa-1f8f5b5b69b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.537585954Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c9a14339-cbc8-4d33-a435-b9d963fbc47c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.538722204Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.538998993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.543561387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.544174518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.560337135Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.561844887Z" level=info msg="createCtr: deleting container ID b2f541e56cb88cf290e567f92b134c3f0309e932679af93777171378d1d056b3 from idIndex" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.561898246Z" level=info msg="createCtr: removing container b2f541e56cb88cf290e567f92b134c3f0309e932679af93777171378d1d056b3" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.561965515Z" level=info msg="createCtr: deleting container b2f541e56cb88cf290e567f92b134c3f0309e932679af93777171378d1d056b3 from storage" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.564636874Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.53581855Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7f2f2ea2-28c1-4712-859b-6d70b9159779 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.536863108Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=38a02a26-8601-4577-8d43-4759942eb05e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.537852498Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753440/kube-scheduler" id=52aa5b8b-0adf-4df1-a673-3ee3ec7c0f98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.538177262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.542609345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.543198099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.563248829Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=52aa5b8b-0adf-4df1-a673-3ee3ec7c0f98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.565221375Z" level=info msg="createCtr: deleting container ID 970d77427d058ae677890aae16f32f0ef12c03f061492ba66333ac2e36b139dd from idIndex" id=52aa5b8b-0adf-4df1-a673-3ee3ec7c0f98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.56529506Z" level=info msg="createCtr: removing container 970d77427d058ae677890aae16f32f0ef12c03f061492ba66333ac2e36b139dd" id=52aa5b8b-0adf-4df1-a673-3ee3ec7c0f98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.565341102Z" level=info msg="createCtr: deleting container 970d77427d058ae677890aae16f32f0ef12c03f061492ba66333ac2e36b139dd from storage" id=52aa5b8b-0adf-4df1-a673-3ee3ec7c0f98 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:38 functional-753440 crio[5806]: time="2025-10-09T18:40:38.568258438Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=52aa5b8b-0adf-4df1-a673-3ee3ec7c0f98 name=/runtime.v1.RuntimeService/CreateContainer
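
The recurring 'cannot open sd-bus: No such file or directory' errors above are the first concrete failure in this log: CRI-O's OCI runtime is using the systemd cgroup manager, which needs a reachable systemd D-Bus socket, and none is available inside the node container. One possible remediation, sketched under the assumption that the stock /etc/crio/crio.conf layout is in place and that falling back to the cgroupfs manager is acceptable for this image:

    # Sketch only: switch CRI-O to the cgroupfs cgroup manager so container
    # creation no longer needs a systemd D-Bus connection. CRI-O requires
    # conmon_cgroup = "pod" when cgroup_manager is "cgroupfs".
    sudo sed -i 's/^cgroup_manager = .*/cgroup_manager = "cgroupfs"/' /etc/crio/crio.conf
    sudo sed -i 's/^conmon_cgroup = .*/conmon_cgroup = "pod"/' /etc/crio/crio.conf
    sudo systemctl restart crio

The log itself only establishes that the sd-bus connection fails; whether the CI base image should instead provide the systemd socket is a separate question.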
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:39.869268   16835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:39.869727   16835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:39.871302   16835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:39.871811   16835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:39.873473   16835 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:39 up  1:23,  0 user,  load average: 0.41, 0.12, 0.09
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564212   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.159164   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: I1009 18:40:31.315674   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.316034   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.344233   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.535991   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.564978   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:31 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:31 functional-753440 kubelet[14909]:  > podSandboxID="fb34d4f739975f6378a39e225741fb0e80fac36aeda99b2080b81999ee15d221"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.565115   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:31 functional-753440 kubelet[14909]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:31 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.565167   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:40:37 functional-753440 kubelet[14909]: E1009 18:40:37.551324   14909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: E1009 18:40:38.160552   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: I1009 18:40:38.318081   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: E1009 18:40:38.318473   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: E1009 18:40:38.535312   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: E1009 18:40:38.568853   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:38 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:38 functional-753440 kubelet[14909]:  > podSandboxID="7a4353736f4a4433982204579f641a25b7ce51b570588adf77ed233c5025e9dc"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: E1009 18:40:38.569084   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:38 functional-753440 kubelet[14909]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:38 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:38 functional-753440 kubelet[14909]: E1009 18:40:38.569148   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	

-- /stdout --
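Note: the kubelet section above keeps failing the same way for etcd, kube-controller-manager, and kube-scheduler ("container create failed: cannot open sd-bus: No such file or directory"), which is why the apiserver never comes back. A minimal Go sketch (a hypothetical helper, not part of this suite) that tallies those failures from a saved kubelet log on stdin:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches lines like "container kube-scheduler start failed in pod ...".
	re := regexp.MustCompile(`container (\S+) start failed`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // kubelet event lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for name, n := range counts {
		fmt.Printf("%s: %d start failures\n", name, n)
	}
}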
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (327.592967ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (2.97s)
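Every kubectl call in this failure dies with "dial tcp 192.168.49.2:8441: connect: connection refused", consistent with the Stopped apiserver status above. A minimal Go sketch (assuming the endpoint shown in these logs) that reproduces the same diagnosis without kubectl:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.49.2:8441 is the apiserver endpoint reported throughout this run.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// Expected while the apiserver is down: "connect: connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}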

TestFunctional/parallel/ServiceCmdConnect (2.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-753440 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-753440 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (52.7077ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-753440 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-753440 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-753440 describe po hello-node-connect: exit status 1 (52.387067ms)

** stderr ** 
	E1009 18:40:35.993644   57106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:35.994041   57106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:35.995441   57106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:35.995866   57106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:35.997385   57106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-753440 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-753440 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-753440 logs -l app=hello-node-connect: exit status 1 (61.193506ms)

** stderr ** 
	E1009 18:40:36.055247   57119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.055612   57119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.057039   57119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.057306   57119 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-753440 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-753440 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-753440 describe svc hello-node-connect: exit status 1 (67.86594ms)

** stderr ** 
	E1009 18:40:36.121654   57131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.122223   57131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.124346   57131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.124788   57131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:36.126252   57131 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-753440 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
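The snapshot above is just the three standard proxy variables read from the host; a small Go sketch (hypothetical, mirroring the "<empty>" placeholder used here) of taking an equivalent snapshot:

package main

import (
	"fmt"
	"os"
)

func main() {
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>" // same placeholder the post-mortem prints
		}
		fmt.Printf("%s=%q\n", k, v)
	}
}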
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
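The inspect output above shows the apiserver port (8441/tcp) published on 127.0.0.1:32781. A Go sketch (assuming a local docker CLI) that extracts that mapping with the same inspect template minikube itself runs later in this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the "Last Start" log uses for the SSH port, pointed at 8441/tcp.
	tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-753440").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("host port for 8441/tcp:", strings.TrimSpace(string(out))) // 32781 per the output above
}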
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (328.915177ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ functional-753440 kubectl -- --context functional-753440 get pods                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ start   │ -p functional-753440 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:28 UTC │                     │
	│ config  │ functional-753440 config unset cpus                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config get cpus                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ service │ functional-753440 service list                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ config  │ functional-753440 config set cpus 2                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config get cpus                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config unset cpus                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ config  │ functional-753440 config get cpus                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh -n functional-753440 sudo cat /home/docker/cp-test.txt                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh echo hello                                                                                          │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ service │ functional-753440 service list -o json                                                                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ tunnel  │ functional-753440 tunnel --alsologtostderr                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ tunnel  │ functional-753440 tunnel --alsologtostderr                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ cp      │ functional-753440 cp functional-753440:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd806855305/001/cp-test.txt │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh cat /etc/hostname                                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ service │ functional-753440 service --namespace=default --https --url hello-node                                                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ tunnel  │ functional-753440 tunnel --alsologtostderr                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh -n functional-753440 sudo cat /home/docker/cp-test.txt                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ service │ functional-753440 service hello-node --url --format={{.IP}}                                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ service │ functional-753440 service hello-node --url                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ cp      │ functional-753440 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh -n functional-753440 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ addons  │ functional-753440 addons list                                                                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ addons  │ functional-753440 addons list -o json                                                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:28:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:28:14.121358   41166 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:28:14.121581   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121584   41166 out.go:374] Setting ErrFile to fd 2...
	I1009 18:28:14.121587   41166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:28:14.121762   41166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:28:14.122238   41166 out.go:368] Setting JSON to false
	I1009 18:28:14.123079   41166 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4242,"bootTime":1760030252,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:28:14.123169   41166 start.go:141] virtualization: kvm guest
	I1009 18:28:14.126034   41166 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:28:14.127592   41166 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:28:14.127614   41166 notify.go:220] Checking for updates...
	I1009 18:28:14.130226   41166 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:28:14.131542   41166 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:28:14.132869   41166 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:28:14.134010   41166 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:28:14.135272   41166 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:28:14.137002   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:14.137147   41166 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:28:14.160624   41166 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:28:14.160747   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.216904   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.207579982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.216988   41166 docker.go:318] overlay module found
	I1009 18:28:14.218985   41166 out.go:179] * Using the docker driver based on existing profile
	I1009 18:28:14.220343   41166 start.go:305] selected driver: docker
	I1009 18:28:14.220350   41166 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.220421   41166 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:28:14.220493   41166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:28:14.276259   41166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-09 18:28:14.266635533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:28:14.276841   41166 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:28:14.276862   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:14.276912   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:14.276975   41166 start.go:349] cluster config:
	{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:14.279613   41166 out.go:179] * Starting "functional-753440" primary control-plane node in "functional-753440" cluster
	I1009 18:28:14.281054   41166 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:28:14.282608   41166 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:28:14.283987   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:14.284021   41166 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:28:14.284028   41166 cache.go:64] Caching tarball of preloaded images
	I1009 18:28:14.284084   41166 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:28:14.284156   41166 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:28:14.284167   41166 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:28:14.284262   41166 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/config.json ...
	I1009 18:28:14.304989   41166 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:28:14.304998   41166 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:28:14.305012   41166 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:28:14.305037   41166 start.go:360] acquireMachinesLock for functional-753440: {Name:mka6dd10318522f9d68a16550e4b04812fa22004 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:28:14.305103   41166 start.go:364] duration metric: took 53.763µs to acquireMachinesLock for "functional-753440"
	I1009 18:28:14.305117   41166 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:28:14.305123   41166 fix.go:54] fixHost starting: 
	I1009 18:28:14.305350   41166 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:28:14.322441   41166 fix.go:112] recreateIfNeeded on functional-753440: state=Running err=<nil>
	W1009 18:28:14.322475   41166 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:28:14.324442   41166 out.go:252] * Updating the running docker "functional-753440" container ...
	I1009 18:28:14.324473   41166 machine.go:93] provisionDockerMachine start ...
	I1009 18:28:14.324533   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.341338   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.341548   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.341554   41166 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:28:14.486226   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.486250   41166 ubuntu.go:182] provisioning hostname "functional-753440"
	I1009 18:28:14.486345   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.504505   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.504708   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.504715   41166 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753440 && echo "functional-753440" | sudo tee /etc/hostname
	I1009 18:28:14.659579   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753440
	
	I1009 18:28:14.659644   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.677783   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:14.677973   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:14.677983   41166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753440' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753440/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753440' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:28:14.823918   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:28:14.823946   41166 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:28:14.823965   41166 ubuntu.go:190] setting up certificates
	I1009 18:28:14.823972   41166 provision.go:84] configureAuth start
	I1009 18:28:14.824015   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:14.841567   41166 provision.go:143] copyHostCerts
	I1009 18:28:14.841617   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:28:14.841630   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:28:14.841693   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:28:14.841773   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:28:14.841776   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:28:14.841800   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:28:14.841852   41166 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:28:14.841854   41166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:28:14.841874   41166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:28:14.841914   41166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.functional-753440 san=[127.0.0.1 192.168.49.2 functional-753440 localhost minikube]
	I1009 18:28:14.981751   41166 provision.go:177] copyRemoteCerts
	I1009 18:28:14.981793   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:28:14.981823   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:14.999896   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.102707   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:28:15.120896   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 18:28:15.138889   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:28:15.156869   41166 provision.go:87] duration metric: took 332.885748ms to configureAuth
	I1009 18:28:15.156885   41166 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:28:15.157034   41166 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:28:15.157151   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.175195   41166 main.go:141] libmachine: Using SSH client type: native
	I1009 18:28:15.175399   41166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1009 18:28:15.175409   41166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:28:15.452446   41166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:28:15.452465   41166 machine.go:96] duration metric: took 1.127985417s to provisionDockerMachine
	I1009 18:28:15.452477   41166 start.go:293] postStartSetup for "functional-753440" (driver="docker")
	I1009 18:28:15.452491   41166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:28:15.452568   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:28:15.452629   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.470937   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.575864   41166 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:28:15.579955   41166 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:28:15.579971   41166 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:28:15.579990   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:28:15.580053   41166 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:28:15.580152   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:28:15.580226   41166 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts -> hosts in /etc/test/nested/copy/14880
	I1009 18:28:15.580265   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/14880
	I1009 18:28:15.588947   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:15.607328   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts --> /etc/test/nested/copy/14880/hosts (40 bytes)
	I1009 18:28:15.625331   41166 start.go:296] duration metric: took 172.840814ms for postStartSetup
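
The two filesync lines above apply one rule: anything under .minikube/files is mirrored onto the node at the same path relative to that root, which is why 148802.pem lands in /etc/ssl/certs and the nested hosts file in /etc/test/nested/copy/14880. A sketch of that mapping, assuming the same source root:

    // Illustrative sketch of the files -> node path mapping, not filesync.go itself.
    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	root := "/home/jenkins/minikube-integration/21139-11374/.minikube/files"
    	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		// The path relative to files/ becomes the absolute destination on the node.
    		dest := "/" + strings.TrimPrefix(p, root+"/")
    		fmt.Printf("scp %s --> %s\n", p, dest)
    		return nil
    	})
    }
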
	I1009 18:28:15.625414   41166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:28:15.625450   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.644868   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.745460   41166 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:28:15.750036   41166 fix.go:56] duration metric: took 1.444904813s for fixHost
	I1009 18:28:15.750054   41166 start.go:83] releasing machines lock for "functional-753440", held for 1.444944565s
	I1009 18:28:15.750113   41166 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-753440
	I1009 18:28:15.768383   41166 ssh_runner.go:195] Run: cat /version.json
	I1009 18:28:15.768426   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.768462   41166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:28:15.768509   41166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:28:15.787244   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.788794   41166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:28:15.887419   41166 ssh_runner.go:195] Run: systemctl --version
	I1009 18:28:15.939267   41166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:28:15.975115   41166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:28:15.980039   41166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:28:15.980121   41166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:28:15.988843   41166 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:28:15.988855   41166 start.go:495] detecting cgroup driver to use...
	I1009 18:28:15.988896   41166 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:28:15.988937   41166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:28:16.003980   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:28:16.017315   41166 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:28:16.017382   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:28:16.032779   41166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:28:16.045881   41166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:28:16.126678   41166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:28:16.213883   41166 docker.go:234] disabling docker service ...
	I1009 18:28:16.213927   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:28:16.229180   41166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:28:16.242501   41166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:28:16.328471   41166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:28:16.418726   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:28:16.432452   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:28:16.447044   41166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:28:16.447090   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.456711   41166 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:28:16.456763   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.466740   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.476505   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.485804   41166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:28:16.494457   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.504131   41166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.513460   41166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:28:16.522986   41166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:28:16.531036   41166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:28:16.539288   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:16.625799   41166 ssh_runner.go:195] Run: sudo systemctl restart crio
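
Read together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf asserting roughly the following before crio is restarted — a reconstruction from the commands, not a capture of the actual file, with surrounding TOML sections omitted:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
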
	I1009 18:28:16.734227   41166 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:28:16.734392   41166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:28:16.738753   41166 start.go:563] Will wait 60s for crictl version
	I1009 18:28:16.738810   41166 ssh_runner.go:195] Run: which crictl
	I1009 18:28:16.742485   41166 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:28:16.767659   41166 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:28:16.767722   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.796602   41166 ssh_runner.go:195] Run: crio --version
	I1009 18:28:16.826463   41166 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:28:16.827844   41166 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:28:16.845122   41166 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:28:16.851283   41166 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1009 18:28:16.852593   41166 kubeadm.go:883] updating cluster {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:28:16.852703   41166 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:28:16.852758   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.885854   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.885865   41166 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:28:16.885914   41166 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:28:16.911537   41166 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:28:16.911549   41166 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:28:16.911554   41166 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1009 18:28:16.911659   41166 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-753440 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:28:16.911716   41166 ssh_runner.go:195] Run: crio config
	I1009 18:28:16.959392   41166 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1009 18:28:16.959415   41166 cni.go:84] Creating CNI manager for ""
	I1009 18:28:16.959431   41166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:28:16.959447   41166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:28:16.959474   41166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753440 NodeName:functional-753440 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:28:16.959581   41166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-753440"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:28:16.959637   41166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:28:16.967720   41166 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:28:16.967786   41166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:28:16.975557   41166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1009 18:28:16.988463   41166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:28:17.001726   41166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1009 18:28:17.014711   41166 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:28:17.018916   41166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:28:17.102967   41166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:28:17.116133   41166 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440 for IP: 192.168.49.2
	I1009 18:28:17.116168   41166 certs.go:195] generating shared ca certs ...
	I1009 18:28:17.116186   41166 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:28:17.116310   41166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:28:17.116344   41166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:28:17.116350   41166 certs.go:257] generating profile certs ...
	I1009 18:28:17.116439   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.key
	I1009 18:28:17.116473   41166 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key.01289d3a
	I1009 18:28:17.116504   41166 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key
	I1009 18:28:17.116599   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:28:17.116623   41166 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:28:17.116628   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:28:17.116647   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:28:17.116699   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:28:17.116718   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:28:17.116754   41166 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:28:17.117319   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:28:17.135881   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:28:17.153983   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:28:17.171867   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:28:17.189721   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:28:17.208056   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:28:17.226995   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:28:17.245251   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:28:17.263239   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:28:17.281041   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:28:17.298701   41166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:28:17.316541   41166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:28:17.329669   41166 ssh_runner.go:195] Run: openssl version
	I1009 18:28:17.335820   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:28:17.344631   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348564   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.348610   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:28:17.382973   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:28:17.391446   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:28:17.399936   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403644   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.403697   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:28:17.438115   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:28:17.446527   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:28:17.455201   41166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459043   41166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.459093   41166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:28:17.494448   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
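
Each openssl -hash / ln -fs pair above installs a certificate under /etc/ssl/certs/<subject-hash>.0, the lookup scheme OpenSSL-based clients use to find trust anchors (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the test certs). One iteration sketched in Go by shelling out to the same tools the log shows; the paths are taken from the log:

    // Illustrative sketch of the hash-symlink step, not minikube's code.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
    	link := "/etc/ssl/certs/" + hash + ".0"
    	// ln -fs replaces any stale link; the log additionally guards with `test -L`.
    	if err := exec.Command("sudo", "ln", "-fs", "/etc/ssl/certs/minikubeCA.pem", link).Run(); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link)
    }
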
	I1009 18:28:17.503208   41166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:28:17.507381   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:28:17.542560   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:28:17.577279   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:28:17.612414   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:28:17.648669   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:28:17.684353   41166 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
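
Each -checkend 86400 run above exits non-zero if the certificate stops being valid within the next 24 hours, which is what would trigger regeneration. An in-process Go equivalent, for illustration only (minikube itself shells out to openssl, as shown):

    // Illustrative equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/etcd/peer.crt
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("not PEM data")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// -checkend 86400: fail if the cert is no longer valid 24h from now.
    	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
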
	I1009 18:28:17.718697   41166 kubeadm.go:400] StartCluster: {Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:28:17.718762   41166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:28:17.718816   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.747722   41166 cri.go:89] found id: ""
	I1009 18:28:17.747771   41166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:28:17.755951   41166 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:28:17.755970   41166 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:28:17.756013   41166 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:28:17.763739   41166 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.764201   41166 kubeconfig.go:125] found "functional-753440" server: "https://192.168.49.2:8441"
	I1009 18:28:17.765394   41166 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:28:17.773512   41166 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-09 18:13:46.132659514 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-09 18:28:17.012910366 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
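
The drift check above is nothing more than diff -u over the live kubeadm.yaml and the freshly rendered .new file: exit status 0 means the existing config can be reused, exit status 1 means drift, so the cluster is reconfigured from the new file. A sketch of that decision, using the same paths:

    // Illustrative sketch of the kubeadm config drift check.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
    	if err == nil {
    		fmt.Println("no kubeadm config drift")
    		return
    	}
    	// diff exits 1 when the files differ; any other failure is a real error.
    	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
    		fmt.Printf("config drift detected, reconfiguring:\n%s", out)
    		return
    	}
    	panic(err)
    }
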
	I1009 18:28:17.773526   41166 kubeadm.go:1160] stopping kube-system containers ...
	I1009 18:28:17.773536   41166 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 18:28:17.773573   41166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:28:17.801424   41166 cri.go:89] found id: ""
	I1009 18:28:17.801491   41166 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 18:28:17.844900   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:28:17.853365   41166 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct  9 18:17 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct  9 18:17 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  9 18:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct  9 18:17 /etc/kubernetes/scheduler.conf
	
	I1009 18:28:17.853413   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:28:17.861284   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:28:17.869531   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.869582   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:28:17.877552   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.885384   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.885430   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:28:17.893514   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:28:17.901554   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:28:17.901605   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:28:17.910046   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:28:17.918503   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:17.960612   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.029109   41166 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.068473628s)
	I1009 18:28:19.029180   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.195034   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.243702   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 18:28:19.294305   41166 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:28:19.294364   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:28:19.794527   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 117 further runs of the same probe elided: "sudo pgrep -xnf kube-apiserver.*minikube.*" repeats at ~500ms intervals without a match, 18:28:20.295201 through 18:29:18.294594 ...]
	I1009 18:29:18.794871   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
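
The run above is the apiserver wait loop: the identical pgrep probe fires every ~500ms until a matching process appears or the wait gives up, at which point minikube falls through to the log gathering below. A self-contained sketch of that loop; the 4-minute deadline is an assumption for illustration, not the value api_server.go actually uses:

    // Illustrative sketch of the ~500ms apiserver polling loop.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // assumed timeout
    	for time.Now().Before(deadline) {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			fmt.Println("apiserver process appeared")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the apiserver process")
    }
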
	I1009 18:29:19.295378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:19.295433   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:19.321387   41166 cri.go:89] found id: ""
	I1009 18:29:19.321402   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.321411   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:19.321418   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:19.321468   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:19.348366   41166 cri.go:89] found id: ""
	I1009 18:29:19.348380   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.348387   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:19.348391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:19.348435   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:19.374894   41166 cri.go:89] found id: ""
	I1009 18:29:19.374906   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.374912   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:19.374916   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:19.374955   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:19.401088   41166 cri.go:89] found id: ""
	I1009 18:29:19.401106   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.401114   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:19.401121   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:19.401191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:19.428021   41166 cri.go:89] found id: ""
	I1009 18:29:19.428033   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.428043   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:19.428047   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:19.428105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:19.454576   41166 cri.go:89] found id: ""
	I1009 18:29:19.454590   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.454595   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:19.454599   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:19.454639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:19.480743   41166 cri.go:89] found id: ""
	I1009 18:29:19.480760   41166 logs.go:282] 0 containers: []
	W1009 18:29:19.480767   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:19.480774   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:19.480783   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:19.509728   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:19.509743   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:19.578764   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:19.578781   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:19.590528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:19.590544   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:19.646752   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:19.639577    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.640309    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.641990    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.642451    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:19.643983    6674 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:19.646773   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:19.646784   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:22.208868   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:22.219498   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:22.219549   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:22.245808   41166 cri.go:89] found id: ""
	I1009 18:29:22.245825   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.245833   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:22.245839   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:22.245884   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:22.271240   41166 cri.go:89] found id: ""
	I1009 18:29:22.271253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.271259   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:22.271263   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:22.271301   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:22.299626   41166 cri.go:89] found id: ""
	I1009 18:29:22.299641   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.299650   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:22.299656   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:22.299699   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:22.326461   41166 cri.go:89] found id: ""
	I1009 18:29:22.326473   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.326479   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:22.326484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:22.326526   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:22.352237   41166 cri.go:89] found id: ""
	I1009 18:29:22.352253   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.352264   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:22.352268   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:22.352316   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:22.378255   41166 cri.go:89] found id: ""
	I1009 18:29:22.378268   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.378276   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:22.378297   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:22.378351   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:22.403983   41166 cri.go:89] found id: ""
	I1009 18:29:22.403999   41166 logs.go:282] 0 containers: []
	W1009 18:29:22.404006   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:22.404013   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:22.404024   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:22.470710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:22.470727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:22.482584   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:22.482599   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:22.536359   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:22.529981    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.530412    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.531972    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.532353    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:22.533814    6783 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:22.536380   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:22.536394   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:22.601517   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:22.601533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:25.128918   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:25.139722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:25.139766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:25.165463   41166 cri.go:89] found id: ""
	I1009 18:29:25.165478   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.165486   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:25.165490   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:25.165537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:25.190387   41166 cri.go:89] found id: ""
	I1009 18:29:25.190400   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.190407   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:25.190411   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:25.190460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:25.216675   41166 cri.go:89] found id: ""
	I1009 18:29:25.216690   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.216698   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:25.216703   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:25.216747   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:25.242179   41166 cri.go:89] found id: ""
	I1009 18:29:25.242191   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.242197   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:25.242202   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:25.242248   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:25.267486   41166 cri.go:89] found id: ""
	I1009 18:29:25.267502   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.267511   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:25.267517   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:25.267568   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:25.297914   41166 cri.go:89] found id: ""
	I1009 18:29:25.297930   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.297939   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:25.297945   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:25.298000   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:25.328702   41166 cri.go:89] found id: ""
	I1009 18:29:25.328718   41166 logs.go:282] 0 containers: []
	W1009 18:29:25.328727   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:25.328736   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:25.328747   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:25.395115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:25.395130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:25.407227   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:25.407245   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:25.462374   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:25.455561    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.456085    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.457650    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.458100    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:25.459563    6917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:25.462400   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:25.462410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:25.525388   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:25.525409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:28.053225   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:28.063873   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:28.063918   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:28.088014   41166 cri.go:89] found id: ""
	I1009 18:29:28.088030   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.088038   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:28.088045   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:28.088091   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:28.114133   41166 cri.go:89] found id: ""
	I1009 18:29:28.114163   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.114172   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:28.114177   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:28.114221   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:28.138995   41166 cri.go:89] found id: ""
	I1009 18:29:28.139007   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.139017   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:28.139022   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:28.139072   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:28.163909   41166 cri.go:89] found id: ""
	I1009 18:29:28.163925   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.163984   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:28.163991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:28.164032   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:28.190078   41166 cri.go:89] found id: ""
	I1009 18:29:28.190091   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.190096   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:28.190101   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:28.190171   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:28.215236   41166 cri.go:89] found id: ""
	I1009 18:29:28.215251   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.215260   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:28.215265   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:28.215315   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:28.241659   41166 cri.go:89] found id: ""
	I1009 18:29:28.241675   41166 logs.go:282] 0 containers: []
	W1009 18:29:28.241684   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:28.241692   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:28.241701   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:28.312258   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:28.312275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:28.323979   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:28.323994   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:28.380524   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:28.373568    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.374186    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.375759    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.376203    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:28.377825    7030 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:28.380538   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:28.380547   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:28.442571   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:28.442588   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:30.972438   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:30.983019   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:30.983078   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:31.007563   41166 cri.go:89] found id: ""
	I1009 18:29:31.007577   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.007585   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:31.007591   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:31.007665   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:31.033297   41166 cri.go:89] found id: ""
	I1009 18:29:31.033312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.033320   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:31.033326   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:31.033381   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:31.058733   41166 cri.go:89] found id: ""
	I1009 18:29:31.058748   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.058756   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:31.058761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:31.058815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:31.084119   41166 cri.go:89] found id: ""
	I1009 18:29:31.084133   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.084156   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:31.084162   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:31.084206   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:31.109429   41166 cri.go:89] found id: ""
	I1009 18:29:31.109442   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.109448   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:31.109452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:31.109510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:31.135299   41166 cri.go:89] found id: ""
	I1009 18:29:31.135312   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.135322   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:31.135328   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:31.135413   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:31.162606   41166 cri.go:89] found id: ""
	I1009 18:29:31.162621   41166 logs.go:282] 0 containers: []
	W1009 18:29:31.162636   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:31.162643   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:31.162652   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:31.230506   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:31.230556   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:31.241809   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:31.241825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:31.297388   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:31.290563    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.291088    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.292644    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.293059    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:31.294666    7144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:31.297398   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:31.297413   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:31.361486   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:31.361502   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:33.891238   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:33.902005   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:33.902060   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:33.927598   41166 cri.go:89] found id: ""
	I1009 18:29:33.927612   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.927618   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:33.927622   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:33.927673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:33.952038   41166 cri.go:89] found id: ""
	I1009 18:29:33.952053   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.952061   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:33.952066   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:33.952145   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:33.976526   41166 cri.go:89] found id: ""
	I1009 18:29:33.976541   41166 logs.go:282] 0 containers: []
	W1009 18:29:33.976549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:33.976556   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:33.976610   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:34.003219   41166 cri.go:89] found id: ""
	I1009 18:29:34.003234   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.003242   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:34.003247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:34.003330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:34.029762   41166 cri.go:89] found id: ""
	I1009 18:29:34.029775   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.029781   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:34.029785   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:34.029840   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:34.054085   41166 cri.go:89] found id: ""
	I1009 18:29:34.054097   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.054107   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:34.054112   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:34.054179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:34.080890   41166 cri.go:89] found id: ""
	I1009 18:29:34.080903   41166 logs.go:282] 0 containers: []
	W1009 18:29:34.080909   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:34.080915   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:34.080926   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:34.110411   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:34.110426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:34.181234   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:34.181254   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:34.192758   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:34.192772   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:34.248477   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:34.241375    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.241950    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.243535    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.244000    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:34.245566    7286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:34.248486   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:34.248496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:36.816158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:36.827291   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:36.827356   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:36.851760   41166 cri.go:89] found id: ""
	I1009 18:29:36.851775   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.851783   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:36.851789   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:36.851843   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:36.877217   41166 cri.go:89] found id: ""
	I1009 18:29:36.877231   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.877238   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:36.877243   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:36.877284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:36.902388   41166 cri.go:89] found id: ""
	I1009 18:29:36.902401   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.902407   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:36.902411   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:36.902450   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:36.927658   41166 cri.go:89] found id: ""
	I1009 18:29:36.927673   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.927679   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:36.927683   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:36.927735   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:36.952663   41166 cri.go:89] found id: ""
	I1009 18:29:36.952681   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.952688   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:36.952692   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:36.952731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:36.977753   41166 cri.go:89] found id: ""
	I1009 18:29:36.977768   41166 logs.go:282] 0 containers: []
	W1009 18:29:36.977774   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:36.977779   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:36.977819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:37.002782   41166 cri.go:89] found id: ""
	I1009 18:29:37.002796   41166 logs.go:282] 0 containers: []
	W1009 18:29:37.002801   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:37.002807   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:37.002816   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:37.069710   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:37.069726   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:37.081854   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:37.081876   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:37.136826   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:37.130447    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.130883    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132410    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.132756    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:37.134175    7391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:37.136835   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:37.136844   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:37.201251   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:37.201270   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:39.729692   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:39.740542   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:39.740597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:39.766240   41166 cri.go:89] found id: ""
	I1009 18:29:39.766255   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.766263   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:39.766269   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:39.766330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:39.792273   41166 cri.go:89] found id: ""
	I1009 18:29:39.792289   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.792298   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:39.792304   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:39.792360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:39.818498   41166 cri.go:89] found id: ""
	I1009 18:29:39.818513   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.818521   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:39.818526   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:39.818580   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:39.844118   41166 cri.go:89] found id: ""
	I1009 18:29:39.844131   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.844155   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:39.844161   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:39.844204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:39.870849   41166 cri.go:89] found id: ""
	I1009 18:29:39.870862   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.870868   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:39.870872   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:39.870911   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:39.896931   41166 cri.go:89] found id: ""
	I1009 18:29:39.896944   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.896949   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:39.896954   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:39.896996   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:39.923519   41166 cri.go:89] found id: ""
	I1009 18:29:39.923531   41166 logs.go:282] 0 containers: []
	W1009 18:29:39.923537   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:39.923544   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:39.923553   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:39.990863   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:39.990880   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:40.002519   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:40.002534   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:40.059328   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:40.052153    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.052750    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054419    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.054856    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:40.056426    7530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:40.059339   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:40.059349   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:40.125328   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:40.125345   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:42.656004   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:42.666452   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:42.666495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:42.691012   41166 cri.go:89] found id: ""
	I1009 18:29:42.691027   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.691037   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:42.691043   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:42.691086   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:42.715311   41166 cri.go:89] found id: ""
	I1009 18:29:42.715327   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.715335   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:42.715346   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:42.715385   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:42.741564   41166 cri.go:89] found id: ""
	I1009 18:29:42.741577   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.741584   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:42.741590   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:42.741639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:42.765961   41166 cri.go:89] found id: ""
	I1009 18:29:42.765974   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.765980   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:42.765985   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:42.766027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:42.792117   41166 cri.go:89] found id: ""
	I1009 18:29:42.792129   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.792149   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:42.792155   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:42.792208   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:42.817726   41166 cri.go:89] found id: ""
	I1009 18:29:42.817738   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.817745   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:42.817749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:42.817799   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:42.842806   41166 cri.go:89] found id: ""
	I1009 18:29:42.842823   41166 logs.go:282] 0 containers: []
	W1009 18:29:42.842829   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:42.842836   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:42.842850   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:42.908734   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:42.908751   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:42.919767   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:42.919780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:42.975159   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:42.968444    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.969012    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.970635    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.971181    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:42.972729    7644 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:42.975170   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:42.975181   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:43.041463   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:43.041480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:45.571837   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:45.582376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:45.582431   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:45.608198   41166 cri.go:89] found id: ""
	I1009 18:29:45.608211   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.608217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:45.608221   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:45.608286   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:45.635099   41166 cri.go:89] found id: ""
	I1009 18:29:45.635112   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.635118   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:45.635126   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:45.635182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:45.660701   41166 cri.go:89] found id: ""
	I1009 18:29:45.660714   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.660720   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:45.660725   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:45.660765   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:45.686907   41166 cri.go:89] found id: ""
	I1009 18:29:45.686920   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.686926   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:45.686931   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:45.686981   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:45.712880   41166 cri.go:89] found id: ""
	I1009 18:29:45.712893   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.712899   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:45.712902   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:45.712941   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:45.738114   41166 cri.go:89] found id: ""
	I1009 18:29:45.738128   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.738147   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:45.738155   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:45.738200   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:45.764157   41166 cri.go:89] found id: ""
	I1009 18:29:45.764172   41166 logs.go:282] 0 containers: []
	W1009 18:29:45.764178   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:45.764187   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:45.764196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:45.793189   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:45.793204   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:45.861447   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:45.861463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:45.872975   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:45.872988   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:45.928792   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:45.921633    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.922319    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.923962    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.924449    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:45.926072    7785 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:29:45.928810   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:45.928820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
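The cycle above repeats on a roughly three-second cadence: minikube polls for a running kube-apiserver process and, finding none, re-enumerates CRI containers and regathers logs. A minimal Go sketch of that polling pattern, reusing the pgrep expression from the log (illustrative only, not minikube's actual implementation; the five-minute deadline is an assumption for the sketch):

// Illustrative poll loop (not minikube's code): wait for kube-apiserver,
// re-checking on the ~3s cadence visible in the timestamps above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the check from the log:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits 0 only when a matching process exists.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout for this sketch
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}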
	I1009 18:29:48.494959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:48.505724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:48.505766   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:48.531052   41166 cri.go:89] found id: ""
	I1009 18:29:48.531087   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.531099   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:48.531103   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:48.531167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:48.555479   41166 cri.go:89] found id: ""
	I1009 18:29:48.555492   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.555498   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:48.555502   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:48.555543   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:48.581427   41166 cri.go:89] found id: ""
	I1009 18:29:48.581444   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.581452   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:48.581460   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:48.581509   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:48.607162   41166 cri.go:89] found id: ""
	I1009 18:29:48.607176   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.607182   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:48.607187   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:48.607235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:48.632033   41166 cri.go:89] found id: ""
	I1009 18:29:48.632049   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.632058   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:48.632064   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:48.632106   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:48.657205   41166 cri.go:89] found id: ""
	I1009 18:29:48.657218   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.657224   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:48.657229   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:48.657280   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:48.681952   41166 cri.go:89] found id: ""
	I1009 18:29:48.681965   41166 logs.go:282] 0 containers: []
	W1009 18:29:48.681970   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:48.681976   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:48.681986   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:48.751441   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:48.751459   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:48.763252   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:48.763266   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:48.819401   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:48.812637    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.813245    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.814774    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.815273    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.816784    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:48.812637    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.813245    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.814774    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.815273    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:48.816784    7892 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:48.819413   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:48.819426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:48.882158   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:48.882176   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
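Each failed poll triggers the same container census: crictl ps -a --quiet --name=<component> for every control-plane component, and every query comes back empty (the found id: "" / 0 containers: [] lines). A hedged sketch of that enumeration step, with the component list copied from the log:

// Illustrative sketch: enumerate CRI container IDs per component name,
// as the log does with `sudo crictl ps -a --quiet --name=...`.
// --quiet prints one container ID per line, or nothing at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}

Empty output from --quiet is exactly the "0 containers: []" case recorded above.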
	I1009 18:29:51.412646   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:51.423570   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:51.423613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:51.450043   41166 cri.go:89] found id: ""
	I1009 18:29:51.450058   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.450076   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:51.450081   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:51.450130   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:51.474654   41166 cri.go:89] found id: ""
	I1009 18:29:51.474669   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.474676   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:51.474683   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:51.474721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:51.500060   41166 cri.go:89] found id: ""
	I1009 18:29:51.500074   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.500079   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:51.500083   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:51.500125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:51.525095   41166 cri.go:89] found id: ""
	I1009 18:29:51.525110   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.525117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:51.525128   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:51.525192   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:51.550903   41166 cri.go:89] found id: ""
	I1009 18:29:51.550915   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.550921   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:51.550925   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:51.550963   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:51.576021   41166 cri.go:89] found id: ""
	I1009 18:29:51.576039   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.576045   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:51.576050   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:51.576101   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:51.601302   41166 cri.go:89] found id: ""
	I1009 18:29:51.601331   41166 logs.go:282] 0 containers: []
	W1009 18:29:51.601337   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:51.601345   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:51.601357   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:51.673218   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:51.673234   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:51.684673   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:51.684688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:51.740747   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:51.733129    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.733652    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736069    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736560    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.738067    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:51.733129    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.733652    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736069    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.736560    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:51.738067    8028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:51.740756   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:51.740765   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:51.804392   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:51.804410   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
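The repeated "connection refused" from kubectl means nothing is listening on the port this profile's kubeconfig points at (localhost:8441 per the URLs above). The same condition can be confirmed without kubectl; a minimal diagnostic sketch, not part of the test itself:

// Illustrative check: is anything listening on the apiserver port?
// A refused TCP dial here matches kubectl's "connection refused" above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err) // expected while the cycles above keep failing
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}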
	I1009 18:29:54.334647   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:54.345214   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:54.345259   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:54.371054   41166 cri.go:89] found id: ""
	I1009 18:29:54.371070   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.371077   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:54.371081   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:54.371123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:54.397390   41166 cri.go:89] found id: ""
	I1009 18:29:54.397406   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.397414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:54.397420   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:54.397469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:54.423212   41166 cri.go:89] found id: ""
	I1009 18:29:54.423225   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.423231   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:54.423235   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:54.423277   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:54.449723   41166 cri.go:89] found id: ""
	I1009 18:29:54.449738   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.449747   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:54.449753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:54.449794   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:54.476976   41166 cri.go:89] found id: ""
	I1009 18:29:54.476994   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.476999   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:54.477004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:54.477056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:54.502387   41166 cri.go:89] found id: ""
	I1009 18:29:54.502409   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.502419   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:54.502425   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:54.502471   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:54.528021   41166 cri.go:89] found id: ""
	I1009 18:29:54.528037   41166 logs.go:282] 0 containers: []
	W1009 18:29:54.528045   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:54.528053   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:54.528062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:54.596551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:54.596569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:54.607908   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:54.607921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:54.663274   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:54.655349    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.655928    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658342    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658895    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.660440    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:54.655349    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.655928    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658342    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.658895    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:54.660440    8151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:54.663284   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:54.663296   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:29:54.724548   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:54.724565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
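Between polls, minikube collects the same diagnostic bundle every time: the kubelet and CRI-O journals, a filtered dmesg tail, kubectl describe nodes, and a container listing. A sketch of running that bundle directly; the shell commands are copied verbatim from the ssh_runner lines above, while the Go wrapper is illustrative:

// Illustrative sketch: run the diagnostic commands the log gathers.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := []string{
		`sudo journalctl -u kubelet -n 400`,
		`sudo journalctl -u crio -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("$ %s\nerr=%v\n%s\n", c, err, out)
	}
}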
	I1009 18:29:57.253959   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:29:57.264749   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:29:57.264793   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:29:57.292216   41166 cri.go:89] found id: ""
	I1009 18:29:57.292234   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.292244   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:29:57.292252   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:29:57.292322   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:29:57.320628   41166 cri.go:89] found id: ""
	I1009 18:29:57.320644   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.320657   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:29:57.320663   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:29:57.320711   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:29:57.347524   41166 cri.go:89] found id: ""
	I1009 18:29:57.347541   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.347549   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:29:57.347555   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:29:57.347599   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:29:57.374005   41166 cri.go:89] found id: ""
	I1009 18:29:57.374021   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.374029   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:29:57.374034   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:29:57.374080   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:29:57.398685   41166 cri.go:89] found id: ""
	I1009 18:29:57.398700   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.398706   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:29:57.398710   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:29:57.398758   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:29:57.424224   41166 cri.go:89] found id: ""
	I1009 18:29:57.424237   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.424243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:29:57.424247   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:29:57.424298   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:29:57.449118   41166 cri.go:89] found id: ""
	I1009 18:29:57.449144   41166 logs.go:282] 0 containers: []
	W1009 18:29:57.449153   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:29:57.449161   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:29:57.449170   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:29:57.477726   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:29:57.477741   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:29:57.549189   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:29:57.549206   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:29:57.560914   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:29:57.560933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:29:57.615954   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:29:57.609197    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.609718    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611273    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.611750    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:29:57.613311    8289 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:29:57.615970   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:29:57.615980   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:00.177763   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:00.188584   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:00.188628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:00.214820   41166 cri.go:89] found id: ""
	I1009 18:30:00.214835   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.214844   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:00.214851   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:00.214895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:00.239376   41166 cri.go:89] found id: ""
	I1009 18:30:00.239393   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.239401   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:00.239407   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:00.239447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:00.265476   41166 cri.go:89] found id: ""
	I1009 18:30:00.265492   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.265500   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:00.265506   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:00.265556   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:00.291131   41166 cri.go:89] found id: ""
	I1009 18:30:00.291158   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.291167   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:00.291174   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:00.291226   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:00.316623   41166 cri.go:89] found id: ""
	I1009 18:30:00.316636   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.316642   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:00.316646   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:00.316693   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:00.341462   41166 cri.go:89] found id: ""
	I1009 18:30:00.341476   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.341485   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:00.341490   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:00.341531   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:00.366641   41166 cri.go:89] found id: ""
	I1009 18:30:00.366657   41166 logs.go:282] 0 containers: []
	W1009 18:30:00.366663   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:00.366670   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:00.366679   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:00.397505   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:00.397539   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:00.469540   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:00.469557   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:00.481466   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:00.481480   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:00.537449   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:00.530572    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.531116    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.532663    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.533175    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:00.534723    8406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:00.537457   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:00.537466   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:03.107457   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:03.117969   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:03.118030   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:03.144661   41166 cri.go:89] found id: ""
	I1009 18:30:03.144676   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.144684   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:03.144689   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:03.144731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:03.169819   41166 cri.go:89] found id: ""
	I1009 18:30:03.169832   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.169838   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:03.169842   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:03.169880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:03.195252   41166 cri.go:89] found id: ""
	I1009 18:30:03.195264   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.195271   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:03.195276   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:03.195319   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:03.221154   41166 cri.go:89] found id: ""
	I1009 18:30:03.221169   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.221176   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:03.221181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:03.221222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:03.247656   41166 cri.go:89] found id: ""
	I1009 18:30:03.247670   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.247676   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:03.247680   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:03.247736   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:03.273363   41166 cri.go:89] found id: ""
	I1009 18:30:03.273378   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.273386   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:03.273391   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:03.273439   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:03.297383   41166 cri.go:89] found id: ""
	I1009 18:30:03.297399   41166 logs.go:282] 0 containers: []
	W1009 18:30:03.297407   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:03.297415   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:03.297426   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:03.327096   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:03.327110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:03.396551   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:03.396569   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:03.408005   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:03.408020   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:03.462643   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:03.456283    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.456846    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458452    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.458867    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:03.459996    8536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:03.462656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:03.462667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.023381   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:06.034110   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:06.034175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:06.059176   41166 cri.go:89] found id: ""
	I1009 18:30:06.059191   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.059197   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:06.059201   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:06.059261   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:06.085110   41166 cri.go:89] found id: ""
	I1009 18:30:06.085126   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.085146   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:06.085154   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:06.085211   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:06.110722   41166 cri.go:89] found id: ""
	I1009 18:30:06.110738   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.110747   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:06.110753   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:06.110806   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:06.136728   41166 cri.go:89] found id: ""
	I1009 18:30:06.136744   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.136752   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:06.136758   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:06.136815   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:06.162322   41166 cri.go:89] found id: ""
	I1009 18:30:06.162337   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.162345   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:06.162351   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:06.162409   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:06.189203   41166 cri.go:89] found id: ""
	I1009 18:30:06.189217   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.189225   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:06.189230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:06.189374   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:06.215767   41166 cri.go:89] found id: ""
	I1009 18:30:06.215781   41166 logs.go:282] 0 containers: []
	W1009 18:30:06.215790   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:06.215798   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:06.215811   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:06.286131   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:06.286154   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:06.297884   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:06.297899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:06.354614   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:06.347511    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.348070    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.349662    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.350175    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:06.351714    8640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:06.354625   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:06.354634   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:06.421245   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:06.421263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:08.950561   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:08.961412   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:08.961461   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:08.985056   41166 cri.go:89] found id: ""
	I1009 18:30:08.985073   41166 logs.go:282] 0 containers: []
	W1009 18:30:08.985081   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:08.985086   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:08.985155   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:09.010161   41166 cri.go:89] found id: ""
	I1009 18:30:09.010177   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.010185   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:09.010190   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:09.010240   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:09.035006   41166 cri.go:89] found id: ""
	I1009 18:30:09.035021   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.035030   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:09.035035   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:09.035079   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:09.059807   41166 cri.go:89] found id: ""
	I1009 18:30:09.059822   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.059831   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:09.059836   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:09.059877   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:09.085467   41166 cri.go:89] found id: ""
	I1009 18:30:09.085482   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.085490   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:09.085495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:09.085536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:09.110808   41166 cri.go:89] found id: ""
	I1009 18:30:09.110821   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.110826   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:09.110831   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:09.110869   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:09.135842   41166 cri.go:89] found id: ""
	I1009 18:30:09.135854   41166 logs.go:282] 0 containers: []
	W1009 18:30:09.135860   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:09.135867   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:09.135875   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:09.195931   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:09.195948   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:09.225362   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:09.225375   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:09.296888   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:09.296905   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:09.309206   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:09.309223   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:09.365940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:09.358751    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.359361    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.360926    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.361520    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:09.363120    8778 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
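The describe-nodes step fails identically in every cycle: the kubectl binary under /var/lib/minikube/binaries is pointed at the in-VM kubeconfig, whose server is localhost:8441, and the dial is refused, so the process exits 1. A sketch of invoking that exact command and surfacing its exit status (the wrapper is hypothetical; the command line is taken from the log):

// Illustrative sketch: run the same kubectl invocation the log uses and
// report its exit status, which the warnings above show as "status 1".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("kubectl exited with status %d\n", exitErr.ExitCode())
	}
	fmt.Printf("%s", out)
}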
	I1009 18:30:11.867608   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:11.878320   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:11.878362   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:11.904080   41166 cri.go:89] found id: ""
	I1009 18:30:11.904094   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.904103   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:11.904109   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:11.904175   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:11.930291   41166 cri.go:89] found id: ""
	I1009 18:30:11.930308   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.930327   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:11.930332   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:11.930372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:11.955946   41166 cri.go:89] found id: ""
	I1009 18:30:11.955959   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.955965   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:11.955970   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:11.956022   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:11.981169   41166 cri.go:89] found id: ""
	I1009 18:30:11.981184   41166 logs.go:282] 0 containers: []
	W1009 18:30:11.981190   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:11.981197   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:11.981254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:12.006868   41166 cri.go:89] found id: ""
	I1009 18:30:12.006882   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.006890   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:12.006896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:12.006950   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:12.033045   41166 cri.go:89] found id: ""
	I1009 18:30:12.033062   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.033070   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:12.033076   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:12.033123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:12.059215   41166 cri.go:89] found id: ""
	I1009 18:30:12.059228   41166 logs.go:282] 0 containers: []
	W1009 18:30:12.059233   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:12.059240   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:12.059249   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:12.088610   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:12.088630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:12.156730   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:12.156750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:12.168340   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:12.168354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:12.224955   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:12.217733    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.218350    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220045    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.220517    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:12.222048    8899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:12.224965   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:12.224974   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:14.790502   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:14.801228   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:14.801285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:14.828449   41166 cri.go:89] found id: ""
	I1009 18:30:14.828469   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.828478   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:14.828486   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:14.828539   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:14.854655   41166 cri.go:89] found id: ""
	I1009 18:30:14.854672   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.854681   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:14.854687   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:14.854730   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:14.880081   41166 cri.go:89] found id: ""
	I1009 18:30:14.880103   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.880110   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:14.880119   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:14.880182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:14.906543   41166 cri.go:89] found id: ""
	I1009 18:30:14.906556   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.906562   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:14.906567   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:14.906607   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:14.932338   41166 cri.go:89] found id: ""
	I1009 18:30:14.932354   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.932360   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:14.932365   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:14.932417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:14.959648   41166 cri.go:89] found id: ""
	I1009 18:30:14.959661   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.959666   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:14.959670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:14.959722   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:14.985626   41166 cri.go:89] found id: ""
	I1009 18:30:14.985642   41166 logs.go:282] 0 containers: []
	W1009 18:30:14.985651   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:14.985657   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:14.985667   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:15.059129   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:15.059150   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:15.070684   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:15.070698   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:15.127441   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:15.120544    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.121101    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.122649    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.123113    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:15.124615    9017 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:15.127451   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:15.127462   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:15.188736   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:15.188755   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:17.720548   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:17.731158   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:17.731199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:17.756463   41166 cri.go:89] found id: ""
	I1009 18:30:17.756478   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.756485   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:17.756489   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:17.756532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:17.780776   41166 cri.go:89] found id: ""
	I1009 18:30:17.780792   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.780799   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:17.780804   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:17.780845   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:17.805635   41166 cri.go:89] found id: ""
	I1009 18:30:17.805648   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.805654   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:17.805658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:17.805700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:17.832060   41166 cri.go:89] found id: ""
	I1009 18:30:17.832074   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.832079   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:17.832084   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:17.832125   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:17.859215   41166 cri.go:89] found id: ""
	I1009 18:30:17.859231   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.859240   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:17.859248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:17.859299   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:17.884007   41166 cri.go:89] found id: ""
	I1009 18:30:17.884021   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.884027   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:17.884031   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:17.884073   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:17.908524   41166 cri.go:89] found id: ""
	I1009 18:30:17.908537   41166 logs.go:282] 0 containers: []
	W1009 18:30:17.908543   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:17.908550   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:17.908559   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:17.974071   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:17.974088   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:17.985794   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:17.985809   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:18.042658   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:18.035698    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.036247    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.037804    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.038378    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:18.039940    9137 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:18.042678   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:18.042688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:18.104183   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:18.104201   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.634002   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:20.645000   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:20.645074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:20.671295   41166 cri.go:89] found id: ""
	I1009 18:30:20.671309   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.671320   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:20.671325   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:20.671370   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:20.699380   41166 cri.go:89] found id: ""
	I1009 18:30:20.699393   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.699399   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:20.699404   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:20.699508   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:20.728459   41166 cri.go:89] found id: ""
	I1009 18:30:20.728483   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.728490   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:20.728502   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:20.728571   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:20.755606   41166 cri.go:89] found id: ""
	I1009 18:30:20.755626   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.755637   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:20.755643   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:20.755704   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:20.783272   41166 cri.go:89] found id: ""
	I1009 18:30:20.783285   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.783291   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:20.783295   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:20.783338   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:20.810985   41166 cri.go:89] found id: ""
	I1009 18:30:20.810998   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.811005   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:20.811009   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:20.811090   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:20.838557   41166 cri.go:89] found id: ""
	I1009 18:30:20.838573   41166 logs.go:282] 0 containers: []
	W1009 18:30:20.838580   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:20.838588   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:20.838597   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:20.868656   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:20.868669   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:20.940019   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:20.940041   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:20.952293   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:20.952307   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:21.010202   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:21.003172    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.003783    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.005520    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.006014    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:21.007633    9280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:21.010215   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:21.010228   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.575003   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:23.585670   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:23.585721   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:23.611187   41166 cri.go:89] found id: ""
	I1009 18:30:23.611202   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.611208   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:23.611216   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:23.611267   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:23.636952   41166 cri.go:89] found id: ""
	I1009 18:30:23.636966   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.636972   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:23.636977   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:23.637018   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:23.661266   41166 cri.go:89] found id: ""
	I1009 18:30:23.661282   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.661289   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:23.661294   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:23.661343   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:23.687560   41166 cri.go:89] found id: ""
	I1009 18:30:23.687573   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.687578   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:23.687583   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:23.687637   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:23.712015   41166 cri.go:89] found id: ""
	I1009 18:30:23.712031   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.712040   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:23.712046   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:23.712103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:23.738106   41166 cri.go:89] found id: ""
	I1009 18:30:23.738120   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.738126   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:23.738130   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:23.738191   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:23.764275   41166 cri.go:89] found id: ""
	I1009 18:30:23.764288   41166 logs.go:282] 0 containers: []
	W1009 18:30:23.764307   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:23.764314   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:23.764322   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:23.775354   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:23.775367   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:23.831862   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:23.824872    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.825499    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827105    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.827605    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:23.829326    9377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:23.831884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:23.831893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:23.894598   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:23.894614   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:23.922715   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:23.922731   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:26.494758   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:26.505984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:26.506076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:26.532013   41166 cri.go:89] found id: ""
	I1009 18:30:26.532029   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.532037   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:26.532042   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:26.532088   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:26.558247   41166 cri.go:89] found id: ""
	I1009 18:30:26.558278   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.558286   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:26.558290   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:26.558335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:26.583466   41166 cri.go:89] found id: ""
	I1009 18:30:26.583479   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.583485   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:26.583495   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:26.583536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:26.611101   41166 cri.go:89] found id: ""
	I1009 18:30:26.611114   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.611126   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:26.611131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:26.611199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:26.636533   41166 cri.go:89] found id: ""
	I1009 18:30:26.636547   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.636553   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:26.636557   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:26.636594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:26.661023   41166 cri.go:89] found id: ""
	I1009 18:30:26.661039   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.661048   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:26.661055   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:26.661103   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:26.686499   41166 cri.go:89] found id: ""
	I1009 18:30:26.686511   41166 logs.go:282] 0 containers: []
	W1009 18:30:26.686518   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:26.686524   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:26.686533   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:26.750968   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:26.750986   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:26.762679   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:26.762697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:26.819065   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:26.812332    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.812909    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.814580    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.815057    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:26.816557    9505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:26.819088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:26.819097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:26.882784   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:26.882801   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:29.411957   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:29.422542   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:29.422590   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:29.448891   41166 cri.go:89] found id: ""
	I1009 18:30:29.448907   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.448916   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:29.448921   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:29.448968   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:29.474806   41166 cri.go:89] found id: ""
	I1009 18:30:29.474823   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.474829   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:29.474834   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:29.474875   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:29.501280   41166 cri.go:89] found id: ""
	I1009 18:30:29.501293   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.501299   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:29.501303   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:29.501344   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:29.528191   41166 cri.go:89] found id: ""
	I1009 18:30:29.528204   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.528210   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:29.528214   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:29.528253   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:29.554786   41166 cri.go:89] found id: ""
	I1009 18:30:29.554799   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.554806   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:29.554811   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:29.554853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:29.579893   41166 cri.go:89] found id: ""
	I1009 18:30:29.579909   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.579918   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:29.579922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:29.579965   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:29.605961   41166 cri.go:89] found id: ""
	I1009 18:30:29.605974   41166 logs.go:282] 0 containers: []
	W1009 18:30:29.605983   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:29.605998   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:29.606010   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:29.667811   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:29.667839   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:29.697600   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:29.697622   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:29.767295   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:29.767316   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:29.779348   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:29.779365   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:29.835961   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:29.829223    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.829767    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831335    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.831758    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:29.833341    9650 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:32.337665   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:32.348466   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:32.348524   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:32.374886   41166 cri.go:89] found id: ""
	I1009 18:30:32.374904   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.374914   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:32.374922   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:32.374970   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:32.400529   41166 cri.go:89] found id: ""
	I1009 18:30:32.400545   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.400554   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:32.400560   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:32.400613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:32.426791   41166 cri.go:89] found id: ""
	I1009 18:30:32.426807   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.426812   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:32.426817   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:32.426857   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:32.452312   41166 cri.go:89] found id: ""
	I1009 18:30:32.452327   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.452332   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:32.452337   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:32.452418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:32.477378   41166 cri.go:89] found id: ""
	I1009 18:30:32.477392   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.477398   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:32.477402   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:32.477445   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:32.503118   41166 cri.go:89] found id: ""
	I1009 18:30:32.503131   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.503154   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:32.503161   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:32.503204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:32.528118   41166 cri.go:89] found id: ""
	I1009 18:30:32.528132   41166 logs.go:282] 0 containers: []
	W1009 18:30:32.528156   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:32.528165   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:32.528175   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:32.591877   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:32.591893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:32.603816   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:32.603831   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:32.660681   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:32.653480    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.654399    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.655963    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.656383    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:32.657937    9752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:30:32.660698   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:32.660707   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:32.720544   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:32.720563   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:35.252168   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:35.262910   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:35.262957   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:35.288174   41166 cri.go:89] found id: ""
	I1009 18:30:35.288191   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.288199   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:35.288205   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:35.288262   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:35.313498   41166 cri.go:89] found id: ""
	I1009 18:30:35.313515   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.313523   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:35.313529   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:35.313576   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:35.337926   41166 cri.go:89] found id: ""
	I1009 18:30:35.337942   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.337950   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:35.337956   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:35.337998   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:35.364071   41166 cri.go:89] found id: ""
	I1009 18:30:35.364085   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.364093   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:35.364100   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:35.364185   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:35.390353   41166 cri.go:89] found id: ""
	I1009 18:30:35.390367   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.390373   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:35.390378   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:35.390419   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:35.416164   41166 cri.go:89] found id: ""
	I1009 18:30:35.416179   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.416185   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:35.416190   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:35.416230   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:35.442115   41166 cri.go:89] found id: ""
	I1009 18:30:35.442131   41166 logs.go:282] 0 containers: []
	W1009 18:30:35.442152   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:35.442161   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:35.442172   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:35.512407   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:35.512424   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:35.524233   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:35.524246   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:35.581940   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:35.574890    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.575447    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577004    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577533    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.579108    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:35.574890    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.575447    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577004    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.577533    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:35.579108    9871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:35.581954   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:35.581963   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:35.645796   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:35.645815   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
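Each iteration of the wait loop above performs the same per-component check: it runs `sudo crictl ps -a --quiet --name=<component>` for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet, and an empty result produces the `No container was found matching "..."` warnings. Below is a minimal Go sketch of that check, assuming crictl is on PATH and sudo is passwordless as in the test VM; it is an illustrative stand-in, not minikube's actual cri.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers mirrors the check in the log: `crictl ps -a --quiet`
// prints one container ID per line for containers whose name matches.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		ids, err := listCRIContainers(c)
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found
			// matching" lines in the log above.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: found %v\n", c, ids)
	}
}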
	I1009 18:30:38.176188   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:38.187286   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:38.187337   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:38.213431   41166 cri.go:89] found id: ""
	I1009 18:30:38.213447   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.213454   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:38.213458   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:38.213506   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:38.239289   41166 cri.go:89] found id: ""
	I1009 18:30:38.239305   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.239313   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:38.239322   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:38.239375   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:38.266575   41166 cri.go:89] found id: ""
	I1009 18:30:38.266590   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.266599   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:38.266604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:38.266659   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:38.293047   41166 cri.go:89] found id: ""
	I1009 18:30:38.293062   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.293071   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:38.293077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:38.293132   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:38.321467   41166 cri.go:89] found id: ""
	I1009 18:30:38.321483   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.321497   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:38.321503   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:38.321550   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:38.348227   41166 cri.go:89] found id: ""
	I1009 18:30:38.348251   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.348259   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:38.348263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:38.348306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:38.374014   41166 cri.go:89] found id: ""
	I1009 18:30:38.374027   41166 logs.go:282] 0 containers: []
	W1009 18:30:38.374033   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:38.374039   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:38.374049   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:38.402788   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:38.402802   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:38.467775   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:38.467793   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:38.479120   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:38.479133   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:38.534788   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:38.527716   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.528266   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.529835   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.530310   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.531921   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:38.527716   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.528266   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.529835   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.530310   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:38.531921   10013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:38.534798   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:38.534808   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:41.097400   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:41.108281   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:41.108326   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:41.134432   41166 cri.go:89] found id: ""
	I1009 18:30:41.134448   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.134456   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:41.134461   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:41.134502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:41.160000   41166 cri.go:89] found id: ""
	I1009 18:30:41.160045   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.160055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:41.160071   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:41.160116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:41.185957   41166 cri.go:89] found id: ""
	I1009 18:30:41.185971   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.185979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:41.185985   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:41.186046   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:41.212581   41166 cri.go:89] found id: ""
	I1009 18:30:41.212595   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.212604   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:41.212611   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:41.212664   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:41.239537   41166 cri.go:89] found id: ""
	I1009 18:30:41.239550   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.239556   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:41.239560   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:41.239603   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:41.264876   41166 cri.go:89] found id: ""
	I1009 18:30:41.264891   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.264906   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:41.264915   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:41.264961   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:41.293949   41166 cri.go:89] found id: ""
	I1009 18:30:41.293962   41166 logs.go:282] 0 containers: []
	W1009 18:30:41.293968   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:41.293975   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:41.293985   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:41.306008   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:41.306023   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:41.363715   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:41.356554   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.357179   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.358764   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.359246   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.361018   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:41.356554   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.357179   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.358764   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.359246   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:41.361018   10118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:41.363727   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:41.363736   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:41.427974   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:41.427993   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:41.457063   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:41.457080   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:44.027395   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:44.038545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:44.038600   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:44.065345   41166 cri.go:89] found id: ""
	I1009 18:30:44.065358   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.065364   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:44.065369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:44.065418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:44.092543   41166 cri.go:89] found id: ""
	I1009 18:30:44.092558   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.092572   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:44.092578   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:44.092628   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:44.117582   41166 cri.go:89] found id: ""
	I1009 18:30:44.117598   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.117606   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:44.117612   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:44.117663   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:44.144537   41166 cri.go:89] found id: ""
	I1009 18:30:44.144554   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.144563   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:44.144569   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:44.144630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:44.170004   41166 cri.go:89] found id: ""
	I1009 18:30:44.170020   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.170027   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:44.170032   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:44.170085   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:44.195566   41166 cri.go:89] found id: ""
	I1009 18:30:44.195581   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.195587   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:44.195591   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:44.195638   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:44.221237   41166 cri.go:89] found id: ""
	I1009 18:30:44.221250   41166 logs.go:282] 0 containers: []
	W1009 18:30:44.221256   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:44.221264   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:44.221273   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:44.290040   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:44.290059   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:44.301528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:44.301543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:44.356883   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:44.350018   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.350577   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352116   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352527   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.353985   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:44.350018   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.350577   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352116   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.352527   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:44.353985   10240 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:44.356892   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:44.356904   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:44.421203   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:44.421220   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:46.952072   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:46.962761   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:46.962852   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:46.988381   41166 cri.go:89] found id: ""
	I1009 18:30:46.988395   41166 logs.go:282] 0 containers: []
	W1009 18:30:46.988401   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:46.988406   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:46.988447   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:47.014123   41166 cri.go:89] found id: ""
	I1009 18:30:47.014151   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.014161   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:47.014167   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:47.014223   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:47.040379   41166 cri.go:89] found id: ""
	I1009 18:30:47.040395   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.040403   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:47.040409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:47.040460   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:47.066430   41166 cri.go:89] found id: ""
	I1009 18:30:47.066444   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.066450   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:47.066454   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:47.066495   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:47.092458   41166 cri.go:89] found id: ""
	I1009 18:30:47.092471   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.092476   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:47.092481   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:47.092522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:47.118558   41166 cri.go:89] found id: ""
	I1009 18:30:47.118574   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.118582   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:47.118588   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:47.118639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:47.143956   41166 cri.go:89] found id: ""
	I1009 18:30:47.143969   41166 logs.go:282] 0 containers: []
	W1009 18:30:47.143975   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:47.143983   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:47.143991   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:47.204921   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:47.204939   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:47.233955   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:47.233972   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:47.299659   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:47.299725   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:47.310930   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:47.310944   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:47.365782   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:47.358862   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.359473   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361059   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361558   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.363067   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:47.358862   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.359473   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361059   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.361558   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:47.363067   10382 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:49.866821   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:49.877492   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:49.877546   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:49.902235   41166 cri.go:89] found id: ""
	I1009 18:30:49.902249   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.902255   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:49.902260   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:49.902330   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:49.927833   41166 cri.go:89] found id: ""
	I1009 18:30:49.927848   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.927855   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:49.927859   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:49.927914   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:49.952484   41166 cri.go:89] found id: ""
	I1009 18:30:49.952500   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.952515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:49.952525   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:49.952653   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:49.978974   41166 cri.go:89] found id: ""
	I1009 18:30:49.978989   41166 logs.go:282] 0 containers: []
	W1009 18:30:49.978997   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:49.979003   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:49.979055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:50.003996   41166 cri.go:89] found id: ""
	I1009 18:30:50.004011   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.004020   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:50.004026   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:50.004074   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:50.029201   41166 cri.go:89] found id: ""
	I1009 18:30:50.029213   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.029220   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:50.029225   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:50.029285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:50.055190   41166 cri.go:89] found id: ""
	I1009 18:30:50.055203   41166 logs.go:282] 0 containers: []
	W1009 18:30:50.055208   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:50.055215   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:50.055224   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:50.124075   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:50.124092   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:50.135918   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:50.135933   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:50.192425   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:50.185538   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.186038   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.187643   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.188060   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.189680   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:50.185538   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.186038   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.187643   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.188060   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:50.189680   10490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:50.192437   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:50.192450   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:50.252346   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:50.252364   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:52.781770   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:52.792376   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:52.792418   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:52.818902   41166 cri.go:89] found id: ""
	I1009 18:30:52.818916   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.818922   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:52.818941   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:52.818984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:52.844120   41166 cri.go:89] found id: ""
	I1009 18:30:52.844145   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.844154   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:52.844160   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:52.844205   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:52.870228   41166 cri.go:89] found id: ""
	I1009 18:30:52.870242   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.870254   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:52.870259   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:52.870305   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:52.896056   41166 cri.go:89] found id: ""
	I1009 18:30:52.896073   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.896082   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:52.896089   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:52.896151   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:52.921111   41166 cri.go:89] found id: ""
	I1009 18:30:52.921126   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.921145   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:52.921152   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:52.921198   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:52.947164   41166 cri.go:89] found id: ""
	I1009 18:30:52.947180   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.947189   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:52.947194   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:52.947246   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:52.972398   41166 cri.go:89] found id: ""
	I1009 18:30:52.972412   41166 logs.go:282] 0 containers: []
	W1009 18:30:52.972419   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:52.972426   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:52.972441   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:53.041501   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:53.041519   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:53.053308   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:53.053324   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:53.109333   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:53.102407   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.102951   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104551   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104933   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.106568   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:53.102407   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.102951   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104551   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.104933   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:53.106568   10619 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:53.109342   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:53.109351   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:53.168700   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:53.168718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:55.699434   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:55.709814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:55.709854   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:55.734822   41166 cri.go:89] found id: ""
	I1009 18:30:55.734841   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.734851   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:55.734858   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:55.734916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:55.759667   41166 cri.go:89] found id: ""
	I1009 18:30:55.759684   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.759692   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:55.759698   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:55.759750   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:55.785789   41166 cri.go:89] found id: ""
	I1009 18:30:55.785805   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.785813   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:55.785819   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:55.785872   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:55.810465   41166 cri.go:89] found id: ""
	I1009 18:30:55.810481   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.810490   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:55.810496   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:55.810537   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:55.836067   41166 cri.go:89] found id: ""
	I1009 18:30:55.836080   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.836086   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:55.836091   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:55.836131   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:55.860951   41166 cri.go:89] found id: ""
	I1009 18:30:55.860967   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.860974   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:55.860978   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:55.861021   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:55.885761   41166 cri.go:89] found id: ""
	I1009 18:30:55.885775   41166 logs.go:282] 0 containers: []
	W1009 18:30:55.885781   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:55.885788   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:55.885797   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:30:55.915265   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:55.915280   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:55.981115   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:55.981146   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:55.993311   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:55.993328   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:56.050751   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:56.043889   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.044374   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.045969   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.046413   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.047907   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:30:56.043889   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.044374   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.045969   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.046413   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:56.047907   10752 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:30:56.050764   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:56.050774   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:58.612432   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:30:58.623245   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:30:58.623295   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:30:58.648116   41166 cri.go:89] found id: ""
	I1009 18:30:58.648129   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.648149   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:30:58.648156   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:30:58.648209   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:30:58.674600   41166 cri.go:89] found id: ""
	I1009 18:30:58.674619   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.674627   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:30:58.674634   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:30:58.674700   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:30:58.700636   41166 cri.go:89] found id: ""
	I1009 18:30:58.700649   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.700655   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:30:58.700659   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:30:58.700701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:30:58.725891   41166 cri.go:89] found id: ""
	I1009 18:30:58.725907   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.725916   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:30:58.725922   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:30:58.725984   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:30:58.751493   41166 cri.go:89] found id: ""
	I1009 18:30:58.751509   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.751517   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:30:58.751523   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:30:58.751565   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:30:58.776578   41166 cri.go:89] found id: ""
	I1009 18:30:58.776594   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.776603   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:30:58.776609   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:30:58.776668   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:30:58.802746   41166 cri.go:89] found id: ""
	I1009 18:30:58.802759   41166 logs.go:282] 0 containers: []
	W1009 18:30:58.802765   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:30:58.802772   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:30:58.802780   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:30:58.871392   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:30:58.871409   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:30:58.883200   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:30:58.883216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:30:58.939993   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:30:58.932935   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.933540   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935122   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.935618   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:30:58.937106   10858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
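The connection-refused errors above line up with the empty crictl listings: no kube-apiserver container exists, so nothing is serving on localhost:8441 and every kubectl call made with the node's kubeconfig fails at the TCP level. A minimal reachability probe, assuming shell access to the node (the endpoint and port are taken from the log; the probe itself is illustrative, not part of the test):

    # Hypothetical check against the apiserver endpoint kubectl is dialing.
    # -k skips TLS verification; a healthy apiserver answers /healthz with "ok".
    curl -k --max-time 5 https://localhost:8441/healthz || echo "apiserver unreachable"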
	I1009 18:30:58.940010   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:30:58.940026   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:30:59.001043   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:30:59.001062   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
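Each polling cycle in this log repeats the same probe sequence: a pgrep for a running kube-apiserver process, one crictl listing per expected control-plane container, then log gathering over kubelet, dmesg, describe nodes, CRI-O, and container status. A condensed sketch of one such cycle, reusing the exact commands from the Run: lines above (the loop wrapper and echo are illustrative):

    # Sketch of one probe cycle, assembled from the commands shown in the log.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    sudo journalctl -u kubelet -n 400    # kubelet logs
    sudo journalctl -u crio -n 400       # CRI-O logs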
	I1009 18:31:01.533754   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:01.544314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:01.544360   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:01.570557   41166 cri.go:89] found id: ""
	I1009 18:31:01.570573   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.570581   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:01.570587   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:01.570633   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:01.597498   41166 cri.go:89] found id: ""
	I1009 18:31:01.597512   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.597518   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:01.597522   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:01.597562   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:01.624834   41166 cri.go:89] found id: ""
	I1009 18:31:01.624850   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.624859   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:01.624865   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:01.624928   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:01.650834   41166 cri.go:89] found id: ""
	I1009 18:31:01.650849   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.650858   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:01.650864   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:01.650902   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:01.676498   41166 cri.go:89] found id: ""
	I1009 18:31:01.676513   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.676522   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:01.676530   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:01.676575   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:01.702274   41166 cri.go:89] found id: ""
	I1009 18:31:01.702288   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.702299   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:01.702304   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:01.702359   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:01.727077   41166 cri.go:89] found id: ""
	I1009 18:31:01.727089   41166 logs.go:282] 0 containers: []
	W1009 18:31:01.727095   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:01.727102   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:01.727110   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:01.794867   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:01.794884   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:01.807132   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:01.807156   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:01.863186   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:01.856581   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.857195   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.858743   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.859211   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:01.860783   10978 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:01.863194   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:01.863203   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:01.926319   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:01.926337   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:04.456429   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:04.467647   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:04.467697   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:04.494363   41166 cri.go:89] found id: ""
	I1009 18:31:04.494376   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.494382   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:04.494386   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:04.494425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:04.519597   41166 cri.go:89] found id: ""
	I1009 18:31:04.519613   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.519622   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:04.519627   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:04.519673   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:04.544960   41166 cri.go:89] found id: ""
	I1009 18:31:04.544973   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.544979   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:04.544983   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:04.545025   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:04.570312   41166 cri.go:89] found id: ""
	I1009 18:31:04.570326   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.570331   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:04.570336   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:04.570376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:04.598075   41166 cri.go:89] found id: ""
	I1009 18:31:04.598088   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.598094   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:04.598098   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:04.598163   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:04.624439   41166 cri.go:89] found id: ""
	I1009 18:31:04.624452   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.624458   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:04.624462   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:04.624501   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:04.650512   41166 cri.go:89] found id: ""
	I1009 18:31:04.650526   41166 logs.go:282] 0 containers: []
	W1009 18:31:04.650535   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:04.650542   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:04.650550   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:04.721753   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:04.721770   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:04.733512   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:04.733526   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:04.789859   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:04.782731   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.783273   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.784877   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.785331   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:04.786824   11106 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:04.789871   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:04.789881   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:04.853995   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:04.854014   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:07.383979   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:07.395090   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:07.395190   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:07.421890   41166 cri.go:89] found id: ""
	I1009 18:31:07.421903   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.421909   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:07.421914   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:07.421966   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:07.448060   41166 cri.go:89] found id: ""
	I1009 18:31:07.448073   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.448079   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:07.448083   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:07.448124   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:07.474470   41166 cri.go:89] found id: ""
	I1009 18:31:07.474482   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.474488   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:07.474493   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:07.474536   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:07.501777   41166 cri.go:89] found id: ""
	I1009 18:31:07.501793   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.501802   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:07.501808   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:07.501851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:07.527522   41166 cri.go:89] found id: ""
	I1009 18:31:07.527534   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.527540   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:07.527545   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:07.527597   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:07.552279   41166 cri.go:89] found id: ""
	I1009 18:31:07.552294   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.552302   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:07.552307   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:07.552346   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:07.576431   41166 cri.go:89] found id: ""
	I1009 18:31:07.576446   41166 logs.go:282] 0 containers: []
	W1009 18:31:07.576454   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:07.576462   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:07.576470   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:07.643680   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:07.643696   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:07.655497   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:07.655511   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:07.710565   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:07.703625   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.704548   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706134   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.706591   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:07.708100   11236 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:07.710581   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:07.710591   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:07.772201   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:07.772218   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.301414   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:10.312068   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:10.312119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:10.336646   41166 cri.go:89] found id: ""
	I1009 18:31:10.336661   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.336668   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:10.336672   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:10.336714   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:10.361765   41166 cri.go:89] found id: ""
	I1009 18:31:10.361779   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.361788   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:10.361793   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:10.361849   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:10.386638   41166 cri.go:89] found id: ""
	I1009 18:31:10.386654   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.386663   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:10.386669   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:10.386715   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:10.412340   41166 cri.go:89] found id: ""
	I1009 18:31:10.412353   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.412359   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:10.412363   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:10.412402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:10.437345   41166 cri.go:89] found id: ""
	I1009 18:31:10.437360   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.437368   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:10.437372   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:10.437412   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:10.461775   41166 cri.go:89] found id: ""
	I1009 18:31:10.461790   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.461797   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:10.461804   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:10.461851   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:10.486502   41166 cri.go:89] found id: ""
	I1009 18:31:10.486515   41166 logs.go:282] 0 containers: []
	W1009 18:31:10.486521   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:10.486528   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:10.486540   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:10.541525   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:10.534617   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.535191   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.536754   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.537206   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:10.538626   11346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:10.541534   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:10.541543   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:10.605554   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:10.605573   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:10.633218   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:10.633233   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:10.698623   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:10.698640   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.212017   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:13.222887   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:13.222934   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:13.249527   41166 cri.go:89] found id: ""
	I1009 18:31:13.249545   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.249553   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:13.249558   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:13.249613   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:13.276030   41166 cri.go:89] found id: ""
	I1009 18:31:13.276047   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.276055   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:13.276062   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:13.276123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:13.301696   41166 cri.go:89] found id: ""
	I1009 18:31:13.301712   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.301722   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:13.301728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:13.301779   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:13.327279   41166 cri.go:89] found id: ""
	I1009 18:31:13.327297   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.327305   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:13.327314   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:13.327376   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:13.352370   41166 cri.go:89] found id: ""
	I1009 18:31:13.352387   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.352396   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:13.352404   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:13.352455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:13.376705   41166 cri.go:89] found id: ""
	I1009 18:31:13.376718   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.376724   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:13.376728   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:13.376769   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:13.401874   41166 cri.go:89] found id: ""
	I1009 18:31:13.401887   41166 logs.go:282] 0 containers: []
	W1009 18:31:13.401893   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:13.401899   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:13.401908   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:13.468065   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:13.468083   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:13.479819   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:13.479833   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:13.536357   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:13.528543   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.529016   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.530652   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532160   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:13.532602   11470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:13.536371   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:13.536385   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:13.595534   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:13.595552   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:16.124813   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:16.135558   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:16.135630   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:16.161632   41166 cri.go:89] found id: ""
	I1009 18:31:16.161649   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.161657   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:16.161662   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:16.161706   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:16.187466   41166 cri.go:89] found id: ""
	I1009 18:31:16.187480   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.187486   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:16.187491   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:16.187532   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:16.214699   41166 cri.go:89] found id: ""
	I1009 18:31:16.214712   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.214718   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:16.214722   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:16.214772   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:16.241600   41166 cri.go:89] found id: ""
	I1009 18:31:16.241617   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.241622   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:16.241627   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:16.241670   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:16.266065   41166 cri.go:89] found id: ""
	I1009 18:31:16.266082   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.266091   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:16.266097   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:16.266158   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:16.291053   41166 cri.go:89] found id: ""
	I1009 18:31:16.291067   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.291073   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:16.291077   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:16.291123   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:16.316037   41166 cri.go:89] found id: ""
	I1009 18:31:16.316053   41166 logs.go:282] 0 containers: []
	W1009 18:31:16.316058   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:16.316065   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:16.316075   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:16.374518   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:16.374537   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:16.403805   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:16.403890   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:16.472344   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:16.472362   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:16.483905   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:16.483921   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:16.539056   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:16.532081   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.532735   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534334   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.534743   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:16.536309   11602 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:19.039513   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:19.050212   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:19.050255   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:19.074802   41166 cri.go:89] found id: ""
	I1009 18:31:19.074819   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.074828   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:19.074834   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:19.074879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:19.101554   41166 cri.go:89] found id: ""
	I1009 18:31:19.101568   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.101574   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:19.101579   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:19.101618   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:19.126592   41166 cri.go:89] found id: ""
	I1009 18:31:19.126604   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.126610   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:19.126614   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:19.126652   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:19.151096   41166 cri.go:89] found id: ""
	I1009 18:31:19.151108   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.151117   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:19.151124   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:19.151179   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:19.175712   41166 cri.go:89] found id: ""
	I1009 18:31:19.175730   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.175736   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:19.175740   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:19.175781   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:19.200064   41166 cri.go:89] found id: ""
	I1009 18:31:19.200080   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.200088   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:19.200094   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:19.200161   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:19.227391   41166 cri.go:89] found id: ""
	I1009 18:31:19.227406   41166 logs.go:282] 0 containers: []
	W1009 18:31:19.227414   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:19.227424   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:19.227434   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:19.289413   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:19.289430   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:19.318081   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:19.318095   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:19.387739   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:19.387754   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:19.399028   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:19.399046   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:19.454538   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:19.447438   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.447971   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449548   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.449995   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:19.451532   11726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:21.956227   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:21.966936   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:21.966995   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:21.991378   41166 cri.go:89] found id: ""
	I1009 18:31:21.991391   41166 logs.go:282] 0 containers: []
	W1009 18:31:21.991397   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:21.991402   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:21.991440   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:22.016783   41166 cri.go:89] found id: ""
	I1009 18:31:22.016796   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.016803   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:22.016808   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:22.016848   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:22.041987   41166 cri.go:89] found id: ""
	I1009 18:31:22.042003   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.042012   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:22.042018   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:22.042068   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:22.067709   41166 cri.go:89] found id: ""
	I1009 18:31:22.067722   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.067727   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:22.067735   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:22.067787   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:22.093654   41166 cri.go:89] found id: ""
	I1009 18:31:22.093666   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.093671   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:22.093675   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:22.093718   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:22.119263   41166 cri.go:89] found id: ""
	I1009 18:31:22.119276   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.119306   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:22.119310   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:22.119350   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:22.143920   41166 cri.go:89] found id: ""
	I1009 18:31:22.143933   41166 logs.go:282] 0 containers: []
	W1009 18:31:22.143939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:22.143945   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:22.143954   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:22.172713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:22.172727   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:22.241689   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:22.241717   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:22.253927   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:22.253942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:22.308454   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:22.301618   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.302105   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.303689   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.304160   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:22.305712   11847 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:22.308469   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:22.308483   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
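The pgrep probes recur on a fixed interval of roughly three seconds (18:31:19.04, 18:31:21.96, 18:31:24.87) rather than with backoff. A minimal sketch of a wait loop with that cadence, assuming a 3-second sleep (the interval is inferred from the timestamps; the loop is illustrative):

    # Hypothetical fixed-interval wait matching the observed cadence.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 3   # probes in the log land ~2.8-2.9s apart
    done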
	I1009 18:31:24.874240   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:24.885199   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:24.885251   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:24.912332   41166 cri.go:89] found id: ""
	I1009 18:31:24.912355   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.912363   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:24.912369   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:24.912510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:24.938534   41166 cri.go:89] found id: ""
	I1009 18:31:24.938551   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.938557   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:24.938564   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:24.938611   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:24.965113   41166 cri.go:89] found id: ""
	I1009 18:31:24.965125   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.965131   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:24.965151   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:24.965204   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:24.991845   41166 cri.go:89] found id: ""
	I1009 18:31:24.991858   41166 logs.go:282] 0 containers: []
	W1009 18:31:24.991864   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:24.991868   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:24.991910   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:25.018693   41166 cri.go:89] found id: ""
	I1009 18:31:25.018706   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.018711   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:25.018717   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:25.018756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:25.044931   41166 cri.go:89] found id: ""
	I1009 18:31:25.044948   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.044957   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:25.044963   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:25.045014   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:25.071449   41166 cri.go:89] found id: ""
	I1009 18:31:25.071465   41166 logs.go:282] 0 containers: []
	W1009 18:31:25.071474   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:25.071483   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:25.071495   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:25.138301   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:25.138320   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:25.150561   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:25.150575   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:25.208095   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:25.201000   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.201519   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203190   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.203673   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:25.205213   11950 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:25.208105   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:25.208114   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:25.272810   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:25.272829   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
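	The "container status" command above is deliberately runtime-agnostic: it resolves crictl to a full path when it can, and falls back to the Docker CLI only if the crictl invocation fails. Written out long-form, the same logic is roughly (a readability sketch, not minikube's exact code):
	
		# prefer crictl, resolved to a full path when installed
		crictl_bin="$(command -v crictl || echo crictl)"
		if ! sudo "$crictl_bin" ps -a; then
		    # crictl is missing or the CRI runtime is down: fall back to Docker
		    sudo docker ps -a
		fi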
	I1009 18:31:27.804229   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:27.815074   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:27.815120   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:27.840171   41166 cri.go:89] found id: ""
	I1009 18:31:27.840188   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.840196   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:27.840200   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:27.840274   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:27.866963   41166 cri.go:89] found id: ""
	I1009 18:31:27.866981   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.866990   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:27.866996   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:27.867076   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:27.893152   41166 cri.go:89] found id: ""
	I1009 18:31:27.893169   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.893177   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:27.893183   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:27.893235   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:27.920337   41166 cri.go:89] found id: ""
	I1009 18:31:27.920350   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.920356   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:27.920361   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:27.920403   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:27.945940   41166 cri.go:89] found id: ""
	I1009 18:31:27.945956   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.945964   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:27.945971   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:27.946036   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:27.971578   41166 cri.go:89] found id: ""
	I1009 18:31:27.971594   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.971600   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:27.971604   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:27.971651   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:27.998876   41166 cri.go:89] found id: ""
	I1009 18:31:27.998890   41166 logs.go:282] 0 containers: []
	W1009 18:31:27.998898   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:27.998907   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:27.998919   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:28.060031   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:28.060050   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:28.090280   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:28.090294   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:28.155986   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:28.156004   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:28.167898   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:28.167912   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:28.224480   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:28.217373   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.217904   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219580   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.219973   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:28.221548   12093 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:30.726158   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:30.736658   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:30.736713   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:30.762096   41166 cri.go:89] found id: ""
	I1009 18:31:30.762111   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.762119   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:30.762125   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:30.762193   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:30.787132   41166 cri.go:89] found id: ""
	I1009 18:31:30.787161   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.787169   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:30.787175   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:30.787234   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:30.813496   41166 cri.go:89] found id: ""
	I1009 18:31:30.813510   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.813515   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:30.813519   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:30.813558   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:30.838073   41166 cri.go:89] found id: ""
	I1009 18:31:30.838089   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.838098   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:30.838104   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:30.838167   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:30.864286   41166 cri.go:89] found id: ""
	I1009 18:31:30.864301   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.864307   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:30.864312   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:30.864353   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:30.890806   41166 cri.go:89] found id: ""
	I1009 18:31:30.890819   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.890825   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:30.890830   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:30.890885   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:30.917461   41166 cri.go:89] found id: ""
	I1009 18:31:30.917474   41166 logs.go:282] 0 containers: []
	W1009 18:31:30.917480   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:30.917487   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:30.917496   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:30.947122   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:30.947157   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:31.013114   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:31.013130   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:31.025904   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:31.025924   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:31.081194   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:31.074116   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.074697   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076284   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.076747   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:31.078298   12214 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:31.081206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:31.081217   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:33.641553   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:33.652051   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:33.652105   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:33.676453   41166 cri.go:89] found id: ""
	I1009 18:31:33.676467   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.676473   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:33.676477   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:33.676517   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:33.701838   41166 cri.go:89] found id: ""
	I1009 18:31:33.701854   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.701862   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:33.701868   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:33.701916   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:33.727771   41166 cri.go:89] found id: ""
	I1009 18:31:33.727787   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.727794   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:33.727799   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:33.727839   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:33.753654   41166 cri.go:89] found id: ""
	I1009 18:31:33.753670   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.753681   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:33.753686   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:33.753731   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:33.780405   41166 cri.go:89] found id: ""
	I1009 18:31:33.780421   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.780430   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:33.780436   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:33.780477   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:33.807435   41166 cri.go:89] found id: ""
	I1009 18:31:33.807448   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.807454   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:33.807458   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:33.807502   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:33.833608   41166 cri.go:89] found id: ""
	I1009 18:31:33.833625   41166 logs.go:282] 0 containers: []
	W1009 18:31:33.833633   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:33.833642   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:33.833655   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:33.900086   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:33.900106   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:33.912409   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:33.912429   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:33.968532   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:33.961720   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.962278   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.963911   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.964427   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:33.965875   12323 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:33.968541   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:33.968551   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:34.031879   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:34.031899   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:36.563728   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:36.574356   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:36.574399   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:36.600194   41166 cri.go:89] found id: ""
	I1009 18:31:36.600209   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.600217   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:36.600223   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:36.600284   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:36.626075   41166 cri.go:89] found id: ""
	I1009 18:31:36.626096   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.626106   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:36.626111   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:36.626182   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:36.652078   41166 cri.go:89] found id: ""
	I1009 18:31:36.652098   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.652104   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:36.652109   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:36.652170   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:36.677462   41166 cri.go:89] found id: ""
	I1009 18:31:36.677474   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.677480   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:36.677484   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:36.677522   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:36.703778   41166 cri.go:89] found id: ""
	I1009 18:31:36.703793   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.703801   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:36.703807   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:36.703856   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:36.729868   41166 cri.go:89] found id: ""
	I1009 18:31:36.729884   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.729893   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:36.729899   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:36.729942   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:36.756775   41166 cri.go:89] found id: ""
	I1009 18:31:36.756787   41166 logs.go:282] 0 containers: []
	W1009 18:31:36.756793   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:36.756801   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:36.756810   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:36.826838   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:36.826854   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:36.838705   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:36.838718   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:36.894816   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:36.887889   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.888440   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890010   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.890538   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:36.891994   12445 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:36.894826   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:36.894838   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:36.959865   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:36.959882   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:39.490368   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:39.501284   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:39.501335   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:39.527003   41166 cri.go:89] found id: ""
	I1009 18:31:39.527016   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.527022   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:39.527026   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:39.527071   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:39.553355   41166 cri.go:89] found id: ""
	I1009 18:31:39.553370   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.553379   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:39.553384   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:39.553425   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:39.579105   41166 cri.go:89] found id: ""
	I1009 18:31:39.579121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.579128   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:39.579133   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:39.579203   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:39.604899   41166 cri.go:89] found id: ""
	I1009 18:31:39.604913   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.604919   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:39.604928   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:39.604985   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:39.630635   41166 cri.go:89] found id: ""
	I1009 18:31:39.630647   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.630653   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:39.630657   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:39.630701   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:39.656106   41166 cri.go:89] found id: ""
	I1009 18:31:39.656121   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.656129   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:39.656148   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:39.656207   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:39.681655   41166 cri.go:89] found id: ""
	I1009 18:31:39.681667   41166 logs.go:282] 0 containers: []
	W1009 18:31:39.681673   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:39.681680   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:39.681688   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:39.744126   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:39.744152   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:39.772799   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:39.772812   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:39.844571   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:39.844590   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:39.856246   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:39.856263   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:39.911854   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:39.905117   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.905586   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907188   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.907677   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:39.909231   12582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:42.413528   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:42.424343   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:42.424407   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:42.450128   41166 cri.go:89] found id: ""
	I1009 18:31:42.450165   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.450173   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:42.450180   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:42.450239   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:42.475946   41166 cri.go:89] found id: ""
	I1009 18:31:42.475961   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.475970   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:42.475976   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:42.476031   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:42.502865   41166 cri.go:89] found id: ""
	I1009 18:31:42.502881   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.502890   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:42.502896   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:42.502946   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:42.530798   41166 cri.go:89] found id: ""
	I1009 18:31:42.530814   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.530823   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:42.530829   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:42.530879   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:42.556524   41166 cri.go:89] found id: ""
	I1009 18:31:42.556539   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.556548   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:42.556554   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:42.556605   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:42.582936   41166 cri.go:89] found id: ""
	I1009 18:31:42.582953   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.582961   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:42.582967   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:42.583055   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:42.609400   41166 cri.go:89] found id: ""
	I1009 18:31:42.609415   41166 logs.go:282] 0 containers: []
	W1009 18:31:42.609424   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:42.609433   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:42.609444   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:42.671451   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:42.671468   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:42.700813   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:42.700832   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:42.769841   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:42.769859   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:42.782244   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:42.782261   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:42.840011   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:42.832755   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.833376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.834917   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.835376   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:42.836976   12714 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:45.340705   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:45.350991   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:45.351034   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:45.375913   41166 cri.go:89] found id: ""
	I1009 18:31:45.375926   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.375932   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:45.375936   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:45.375974   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:45.402366   41166 cri.go:89] found id: ""
	I1009 18:31:45.402380   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.402386   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:45.402391   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:45.402432   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:45.428247   41166 cri.go:89] found id: ""
	I1009 18:31:45.428263   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.428272   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:45.428278   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:45.428332   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:45.454072   41166 cri.go:89] found id: ""
	I1009 18:31:45.454087   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.454094   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:45.454103   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:45.454173   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:45.479985   41166 cri.go:89] found id: ""
	I1009 18:31:45.480000   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.480006   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:45.480012   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:45.480064   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:45.505956   41166 cri.go:89] found id: ""
	I1009 18:31:45.505972   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.505980   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:45.505986   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:45.506041   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:45.530757   41166 cri.go:89] found id: ""
	I1009 18:31:45.530770   41166 logs.go:282] 0 containers: []
	W1009 18:31:45.530775   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:45.530782   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:45.530791   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:45.597676   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:45.597693   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:45.609290   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:45.609305   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:45.666583   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:45.659856   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.660431   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.661987   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.662451   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:45.663976   12820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1009 18:31:45.666593   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:45.666604   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:45.730000   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:45.730018   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:48.259768   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:48.270482   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:48.270528   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:48.297438   41166 cri.go:89] found id: ""
	I1009 18:31:48.297454   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.297462   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:48.297467   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:48.297510   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:48.323680   41166 cri.go:89] found id: ""
	I1009 18:31:48.323695   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.323704   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:48.323710   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:48.323756   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:48.348422   41166 cri.go:89] found id: ""
	I1009 18:31:48.348437   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.348445   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:48.348450   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:48.348507   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:48.373232   41166 cri.go:89] found id: ""
	I1009 18:31:48.373247   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.373253   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:48.373263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:48.373306   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:48.398755   41166 cri.go:89] found id: ""
	I1009 18:31:48.398770   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.398776   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:48.398781   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:48.398822   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:48.423977   41166 cri.go:89] found id: ""
	I1009 18:31:48.423993   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.423999   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:48.424004   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:48.424056   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:48.450473   41166 cri.go:89] found id: ""
	I1009 18:31:48.450486   41166 logs.go:282] 0 containers: []
	W1009 18:31:48.450492   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:48.450499   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:48.450510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:48.461974   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:48.461997   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:48.519875   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:48.513250   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.513778   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515240   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515817   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.517350   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:48.513250   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.513778   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515240   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.515817   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:48.517350   12936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:48.519884   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:48.519893   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:48.579801   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:48.579819   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:48.609008   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:48.609031   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
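Each gathering pass tails the last 400 lines of a systemd unit on the node. To pull the same kubelet slice manually (the --no-pager flag is an addition here for non-interactive capture):

    # last 400 kubelet journal entries, suitable for capture in a log
    sudo journalctl -u kubelet -n 400 --no-pager
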
	I1009 18:31:51.179735   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:51.190623   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:51.190689   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:51.215839   41166 cri.go:89] found id: ""
	I1009 18:31:51.215854   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.215860   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:51.215866   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:51.215919   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:51.241754   41166 cri.go:89] found id: ""
	I1009 18:31:51.241771   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.241781   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:51.241786   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:51.241834   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:51.269204   41166 cri.go:89] found id: ""
	I1009 18:31:51.269221   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.269227   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:51.269233   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:51.269288   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:51.296498   41166 cri.go:89] found id: ""
	I1009 18:31:51.296514   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.296522   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:51.296527   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:51.296573   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:51.323034   41166 cri.go:89] found id: ""
	I1009 18:31:51.323049   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.323057   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:51.323063   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:51.323112   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:51.348104   41166 cri.go:89] found id: ""
	I1009 18:31:51.348119   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.348125   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:51.348131   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:51.348199   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:51.374228   41166 cri.go:89] found id: ""
	I1009 18:31:51.374242   41166 logs.go:282] 0 containers: []
	W1009 18:31:51.374248   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:51.374255   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:51.374265   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:51.403810   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:51.403825   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:51.474611   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:51.474630   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:51.486750   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:51.486766   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:51.542637   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:51.535796   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.536370   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.537923   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.538394   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.539906   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:51.535796   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.536370   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.537923   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.538394   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:51.539906   13074 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:51.542656   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:51.542666   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
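Every "describe nodes" attempt fails the same way: kubectl cannot reach an apiserver on localhost:8441, which is consistent with crictl finding no kube-apiserver container at all. A quick check from the node, assuming ss and curl are present in the node image:

    # confirm nothing is listening on the apiserver port
    sudo ss -ltn 'sport = :8441'
    # probe the health endpoint; connection refused while the apiserver is down
    curl -sk https://localhost:8441/healthz || echo 'apiserver unreachable'
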
	I1009 18:31:54.103184   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:54.114409   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:54.114455   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:54.140634   41166 cri.go:89] found id: ""
	I1009 18:31:54.140646   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.140652   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:54.140656   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:54.140703   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:54.166896   41166 cri.go:89] found id: ""
	I1009 18:31:54.166911   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.166918   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:54.166922   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:54.166962   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:54.193155   41166 cri.go:89] found id: ""
	I1009 18:31:54.193170   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.193176   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:54.193181   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:54.193222   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:54.217754   41166 cri.go:89] found id: ""
	I1009 18:31:54.217767   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.217772   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:54.217777   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:54.217819   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:54.243823   41166 cri.go:89] found id: ""
	I1009 18:31:54.243837   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.243843   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:54.243848   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:54.243887   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:54.271827   41166 cri.go:89] found id: ""
	I1009 18:31:54.271841   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.271847   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:54.271852   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:54.271895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:54.297907   41166 cri.go:89] found id: ""
	I1009 18:31:54.297920   41166 logs.go:282] 0 containers: []
	W1009 18:31:54.297925   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:54.297932   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:54.297942   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:31:54.365493   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:54.365510   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:54.377258   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:54.377275   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:54.432221   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:54.425355   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.425907   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427547   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427972   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.429614   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:54.425355   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.425907   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427547   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.427972   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:54.429614   13181 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:54.432234   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:54.432244   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:54.492172   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:54.492189   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
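The pgrep probe that opens each cycle combines -x (the pattern must match the whole string), -f (match against the full command line) and -n (newest match), so it returns the newest process whose entire command line matches the regex. While the apiserver is down it prints nothing and exits non-zero, which is what keeps this loop retrying:

    # newest process whose full command line matches the pattern
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo $?   # 1 here, since no such process exists yet
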
	I1009 18:31:57.022444   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:57.033223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:57.033285   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:57.059246   41166 cri.go:89] found id: ""
	I1009 18:31:57.059267   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.059273   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:57.059277   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:57.059348   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:31:57.084187   41166 cri.go:89] found id: ""
	I1009 18:31:57.084199   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.084205   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:31:57.084209   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:31:57.084250   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:31:57.109765   41166 cri.go:89] found id: ""
	I1009 18:31:57.109778   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.109784   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:31:57.109788   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:31:57.109828   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:31:57.135796   41166 cri.go:89] found id: ""
	I1009 18:31:57.135809   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.135817   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:31:57.135824   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:31:57.136027   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:31:57.162702   41166 cri.go:89] found id: ""
	I1009 18:31:57.162715   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.162720   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:31:57.162724   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:31:57.162773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:31:57.189575   41166 cri.go:89] found id: ""
	I1009 18:31:57.189588   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.189594   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:31:57.189598   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:31:57.189639   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:31:57.214916   41166 cri.go:89] found id: ""
	I1009 18:31:57.214931   41166 logs.go:282] 0 containers: []
	W1009 18:31:57.214939   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:31:57.214946   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:31:57.214956   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:31:57.226333   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:31:57.226347   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:31:57.282176   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:31:57.275375   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.275847   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277403   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277780   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.279430   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:31:57.275375   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.275847   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277403   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.277780   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:31:57.279430   13316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:31:57.282186   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:31:57.282196   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:31:57.341981   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:31:57.341999   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:31:57.372028   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:31:57.372043   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
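The dmesg gather keeps only warning-and-worse kernel records. With util-linux dmesg, -P disables the pager that -H (human-readable timestamps) would otherwise start, -L=never turns colour off, and --level restricts the priorities; tail then trims the result to 400 lines:

    # kernel warnings and worse, plain output, last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
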
	I1009 18:31:59.940902   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:31:59.951810   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:31:59.951853   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:31:59.977888   41166 cri.go:89] found id: ""
	I1009 18:31:59.977902   41166 logs.go:282] 0 containers: []
	W1009 18:31:59.977908   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:31:59.977912   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:31:59.977977   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:00.004236   41166 cri.go:89] found id: ""
	I1009 18:32:00.004252   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.004265   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:00.004293   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:00.004347   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:00.030808   41166 cri.go:89] found id: ""
	I1009 18:32:00.030826   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.030836   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:00.030842   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:00.030895   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:00.056760   41166 cri.go:89] found id: ""
	I1009 18:32:00.056772   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.056778   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:00.056782   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:00.056826   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:00.083048   41166 cri.go:89] found id: ""
	I1009 18:32:00.083062   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.083068   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:00.083072   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:00.083116   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:00.109679   41166 cri.go:89] found id: ""
	I1009 18:32:00.109693   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.109699   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:00.109704   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:00.109753   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:00.135808   41166 cri.go:89] found id: ""
	I1009 18:32:00.135820   41166 logs.go:282] 0 containers: []
	W1009 18:32:00.135826   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:00.135833   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:00.135841   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:00.192719   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:00.185431   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.185945   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.187601   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.188147   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.189704   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:00.185431   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.185945   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.187601   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.188147   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:00.189704   13431 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:00.192732   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:00.192744   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:00.253264   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:00.253287   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:00.283450   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:00.283463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:00.350291   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:00.350309   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:02.863750   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:02.874396   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:02.874434   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:02.900500   41166 cri.go:89] found id: ""
	I1009 18:32:02.900513   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.900519   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:02.900523   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:02.900563   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:02.926067   41166 cri.go:89] found id: ""
	I1009 18:32:02.926083   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.926092   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:02.926099   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:02.926157   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:02.951112   41166 cri.go:89] found id: ""
	I1009 18:32:02.951127   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.951147   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:02.951154   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:02.951202   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:02.976038   41166 cri.go:89] found id: ""
	I1009 18:32:02.976052   41166 logs.go:282] 0 containers: []
	W1009 18:32:02.976057   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:02.976062   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:02.976114   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:03.001712   41166 cri.go:89] found id: ""
	I1009 18:32:03.001724   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.001730   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:03.001734   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:03.001773   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:03.028181   41166 cri.go:89] found id: ""
	I1009 18:32:03.028195   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.028201   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:03.028205   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:03.028247   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:03.054529   41166 cri.go:89] found id: ""
	I1009 18:32:03.054541   41166 logs.go:282] 0 containers: []
	W1009 18:32:03.054547   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:03.054554   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:03.054565   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:03.122196   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:03.122214   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:03.133617   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:03.133633   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:03.189282   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:03.182610   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.183115   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.184674   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.185052   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.186556   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:03.182610   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.183115   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.184674   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.185052   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:03.186556   13553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:03.189291   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:03.189301   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:03.252856   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:03.252874   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
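In the crictl listings, -a includes exited containers, --quiet prints container IDs only, and --name filters by name, so an empty result (found id: "") means the component has never produced a container in any state, not merely that one is stopped. For example:

    # IDs of all etcd containers in any state; empty output means none exist
    sudo crictl ps -a --quiet --name=etcd
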
	I1009 18:32:05.784812   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:05.795352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:05.795402   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:05.820276   41166 cri.go:89] found id: ""
	I1009 18:32:05.820289   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.820295   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:05.820300   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:05.820341   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:05.846395   41166 cri.go:89] found id: ""
	I1009 18:32:05.846408   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.846414   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:05.846418   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:05.846469   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:05.872185   41166 cri.go:89] found id: ""
	I1009 18:32:05.872199   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.872205   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:05.872209   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:05.872254   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:05.898231   41166 cri.go:89] found id: ""
	I1009 18:32:05.898251   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.898257   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:05.898263   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:05.898303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:05.923683   41166 cri.go:89] found id: ""
	I1009 18:32:05.923699   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.923707   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:05.923712   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:05.923755   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:05.949168   41166 cri.go:89] found id: ""
	I1009 18:32:05.949183   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.949188   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:05.949193   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:05.949236   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:05.975320   41166 cri.go:89] found id: ""
	I1009 18:32:05.975332   41166 logs.go:282] 0 containers: []
	W1009 18:32:05.975338   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:05.975344   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:05.975354   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:06.041809   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:06.041827   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:06.054016   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:06.054040   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:06.110078   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:06.103223   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.103767   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105448   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105875   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.107466   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:06.103223   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.103767   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105448   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.105875   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:06.107466   13672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:06.110088   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:06.110097   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:06.172545   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:06.172564   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
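The describe-nodes probe uses the kubectl binary that minikube provisions on the node, pointed at the node's embedded kubeconfig, so it exercises the same localhost:8441 endpoint the control plane should expose. Run verbatim on the node:

    # same probe by hand, with the node-local kubectl and kubeconfig
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
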
	I1009 18:32:08.701488   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:08.712540   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:08.712594   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:08.738583   41166 cri.go:89] found id: ""
	I1009 18:32:08.738601   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.738608   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:08.738613   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:08.738654   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:08.764379   41166 cri.go:89] found id: ""
	I1009 18:32:08.764396   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.764404   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:08.764412   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:08.764466   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:08.790325   41166 cri.go:89] found id: ""
	I1009 18:32:08.790351   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.790360   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:08.790367   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:08.790417   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:08.816765   41166 cri.go:89] found id: ""
	I1009 18:32:08.816780   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.816788   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:08.816792   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:08.816844   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:08.842038   41166 cri.go:89] found id: ""
	I1009 18:32:08.842050   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.842055   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:08.842060   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:08.842119   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:08.868221   41166 cri.go:89] found id: ""
	I1009 18:32:08.868236   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.868243   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:08.868248   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:08.868291   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:08.894780   41166 cri.go:89] found id: ""
	I1009 18:32:08.894797   41166 logs.go:282] 0 containers: []
	W1009 18:32:08.894804   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:08.894810   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:08.894820   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:08.952094   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:08.944952   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.945523   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947209   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947687   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.949320   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:08.944952   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.945523   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947209   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.947687   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:08.949320   13796 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:08.952107   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:08.952121   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:09.012751   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:09.012769   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:09.042946   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:09.042958   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:09.111059   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:09.111076   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.624407   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:11.635246   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:11.635303   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:11.661128   41166 cri.go:89] found id: ""
	I1009 18:32:11.661159   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.661167   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:11.661173   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:11.661225   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:11.685846   41166 cri.go:89] found id: ""
	I1009 18:32:11.685860   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.685866   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:11.685870   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:11.685909   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:11.711700   41166 cri.go:89] found id: ""
	I1009 18:32:11.711714   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.711719   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:11.711723   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:11.711770   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:11.737208   41166 cri.go:89] found id: ""
	I1009 18:32:11.737220   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.737225   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:11.737230   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:11.737278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:11.762359   41166 cri.go:89] found id: ""
	I1009 18:32:11.762370   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.762376   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:11.762380   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:11.762430   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:11.787996   41166 cri.go:89] found id: ""
	I1009 18:32:11.788011   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.788019   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:11.788024   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:11.788084   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:11.812657   41166 cri.go:89] found id: ""
	I1009 18:32:11.812671   41166 logs.go:282] 0 containers: []
	W1009 18:32:11.812677   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:11.812685   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:11.812694   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:11.879681   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:11.879697   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:11.891109   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:11.891124   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:11.947646   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:11.940720   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.941253   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.942799   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.943257   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.944825   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:11.940720   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.941253   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.942799   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.943257   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:11.944825   13939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:11.947659   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:11.947672   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:12.013733   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:12.013750   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
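Each polling cycle above runs the same battery of crictl queries, and every one comes back with an empty ID list: CRI-O never created any control-plane container, so minikube falls through to gathering kubelet, dmesg, CRI-O, and container-status logs. The same check can be reproduced by hand from the host; this is a minimal sketch, and the profile name is a placeholder, not the one used in this run:

    # hypothetical profile name; substitute the profile under test
    minikube ssh -p functional-000000 "sudo crictl ps -a --quiet --name=kube-apiserver"
    # an empty result corresponds to the 'found id: ""' lines in the log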
	I1009 18:32:14.545559   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:14.556586   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:14.556634   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:14.584233   41166 cri.go:89] found id: ""
	I1009 18:32:14.584250   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.584258   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:14.584263   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:14.584312   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:14.610477   41166 cri.go:89] found id: ""
	I1009 18:32:14.610493   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.610500   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:14.610505   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:14.610560   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:14.635807   41166 cri.go:89] found id: ""
	I1009 18:32:14.635824   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.635832   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:14.635837   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:14.635880   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:14.661016   41166 cri.go:89] found id: ""
	I1009 18:32:14.661034   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.661043   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:14.661049   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:14.661098   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:14.689198   41166 cri.go:89] found id: ""
	I1009 18:32:14.689212   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.689217   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:14.689223   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:14.689278   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:14.714892   41166 cri.go:89] found id: ""
	I1009 18:32:14.714908   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.714917   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:14.714923   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:14.714971   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:14.740412   41166 cri.go:89] found id: ""
	I1009 18:32:14.740425   41166 logs.go:282] 0 containers: []
	W1009 18:32:14.740433   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:14.740440   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:14.740449   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:14.803421   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:14.803439   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:14.831580   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:14.831594   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:14.901628   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:14.901653   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:14.914304   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:14.914326   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:14.971146   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:14.964264   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.964764   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966352   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966731   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.968402   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:14.964264   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.964764   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966352   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.966731   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:14.968402   14073 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:17.472817   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:17.483574   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:32:17.483619   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:32:17.510868   41166 cri.go:89] found id: ""
	I1009 18:32:17.510882   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.510891   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:32:17.510896   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:32:17.510956   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:32:17.537306   41166 cri.go:89] found id: ""
	I1009 18:32:17.537319   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.537325   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:32:17.537329   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:32:17.537372   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:32:17.564957   41166 cri.go:89] found id: ""
	I1009 18:32:17.564972   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.564978   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:32:17.564984   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:32:17.565039   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:32:17.591401   41166 cri.go:89] found id: ""
	I1009 18:32:17.591418   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.591425   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:32:17.591430   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:32:17.591476   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:32:17.617237   41166 cri.go:89] found id: ""
	I1009 18:32:17.617250   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.617256   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:32:17.617260   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:32:17.617302   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:32:17.642328   41166 cri.go:89] found id: ""
	I1009 18:32:17.642342   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.642348   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:32:17.642352   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:32:17.642400   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:32:17.668302   41166 cri.go:89] found id: ""
	I1009 18:32:17.668315   41166 logs.go:282] 0 containers: []
	W1009 18:32:17.668321   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:32:17.668327   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:32:17.668336   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:32:17.679448   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:32:17.679463   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:32:17.736174   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:32:17.728959   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.729672   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731395   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731844   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.733446   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:32:17.728959   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.729672   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731395   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.731844   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:32:17.733446   14176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:32:17.736227   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:32:17.736236   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:32:17.795423   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:32:17.795442   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:32:17.824553   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:32:17.824567   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:32:20.394282   41166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:32:20.405003   41166 kubeadm.go:601] duration metric: took 4m2.649024916s to restartPrimaryControlPlane
	W1009 18:32:20.405078   41166 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 18:32:20.405162   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:32:20.850567   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:32:20.863734   41166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:32:20.872360   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:32:20.872401   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:32:20.880727   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:32:20.880752   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:32:20.880802   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:32:20.888758   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:32:20.888797   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:32:20.896370   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:32:20.904128   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:32:20.904188   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:32:20.911725   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.919740   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:32:20.919783   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:32:20.927592   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:32:20.935300   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:32:20.935348   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
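The four grep/rm pairs above are minikube's stale-config sweep: for each kubeconfig it checks whether the file still points at https://control-plane.minikube.internal:8441 and removes it either way before re-running kubeadm init. A condensed sketch of the same loop, assuming the paths and server string exactly as logged (in this run every grep exits with status 2 because the files were already wiped by kubeadm reset):

    for f in admin kubelet controller-manager scheduler; do
      # grep fails here: kubeadm reset already deleted the files
      sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" || true
      sudo rm -f "/etc/kubernetes/$f.conf"
    done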
	I1009 18:32:20.942573   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:32:20.998838   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:32:21.055610   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:36:23.829821   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:36:23.829939   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:36:23.832833   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:36:23.832899   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:36:23.833001   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:36:23.833078   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:36:23.833131   41166 kubeadm.go:318] OS: Linux
	I1009 18:36:23.833211   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:36:23.833255   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:36:23.833293   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:36:23.833332   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:36:23.833371   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:36:23.833408   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:36:23.833452   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:36:23.833487   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:36:23.833563   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:36:23.833644   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:36:23.833715   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:36:23.833763   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:36:23.836738   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:36:23.836809   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:36:23.836876   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:36:23.836946   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:36:23.836995   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:36:23.837054   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:36:23.837106   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:36:23.837180   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:36:23.837230   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:36:23.837295   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:36:23.837361   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:36:23.837391   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:36:23.837444   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:36:23.837485   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:36:23.837544   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:36:23.837590   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:36:23.837644   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:36:23.837687   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:36:23.837754   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:36:23.837807   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:36:23.840574   41166 out.go:252]   - Booting up control plane ...
	I1009 18:36:23.840651   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:36:23.840709   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:36:23.840759   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:36:23.840847   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:36:23.840933   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:36:23.841023   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:36:23.841122   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:36:23.841176   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:36:23.841286   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:36:23.841382   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:36:23.841430   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500920961s
	I1009 18:36:23.841508   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:36:23.841575   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:36:23.841650   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:36:23.841721   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:36:23.841779   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	I1009 18:36:23.841844   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	I1009 18:36:23.841921   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	I1009 18:36:23.841927   41166 kubeadm.go:318] 
	I1009 18:36:23.842001   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:36:23.842071   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:36:23.842160   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:36:23.842237   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:36:23.842297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:36:23.842366   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:36:23.842394   41166 kubeadm.go:318] 
	W1009 18:36:23.842478   41166 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.500920961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000193088s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000216272s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000612564s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
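kubeadm's own hint above is the natural next step: list CRI-O's kube containers and pull logs from whichever one crashed. A minimal sketch following the exact commands the log prints; CID is a hypothetical placeholder for an ID copied from the listing (in this run the listing is empty, which is itself the finding):

    # commands as printed in the kubeadm output above; run inside the node
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # CID is hypothetical; copy a container ID from the listing above
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CID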
	I1009 18:36:23.842555   41166 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:36:24.285465   41166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:36:24.298222   41166 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:36:24.298276   41166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:36:24.306625   41166 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:36:24.306635   41166 kubeadm.go:157] found existing configuration files:
	
	I1009 18:36:24.306675   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1009 18:36:24.314710   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:36:24.314750   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:36:24.322418   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1009 18:36:24.330123   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:36:24.330187   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:36:24.337953   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.346125   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:36:24.346179   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:36:24.354153   41166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1009 18:36:24.362094   41166 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:36:24.362133   41166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:36:24.369784   41166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:36:24.426834   41166 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:36:24.485641   41166 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:40:27.797583   41166 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:40:27.797662   41166 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:40:27.800620   41166 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:40:27.800659   41166 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:40:27.800736   41166 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:40:27.800783   41166 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:40:27.800811   41166 kubeadm.go:318] OS: Linux
	I1009 18:40:27.800847   41166 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:40:27.800885   41166 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:40:27.800924   41166 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:40:27.800985   41166 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:40:27.801052   41166 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:40:27.801090   41166 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:40:27.801156   41166 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:40:27.801201   41166 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:40:27.801265   41166 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:40:27.801343   41166 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:40:27.801412   41166 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:40:27.801484   41166 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:40:27.805055   41166 out.go:252]   - Generating certificates and keys ...
	I1009 18:40:27.805120   41166 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:40:27.805218   41166 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:40:27.805293   41166 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:40:27.805339   41166 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:40:27.805412   41166 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:40:27.805457   41166 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:40:27.805510   41166 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:40:27.805564   41166 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:40:27.805620   41166 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:40:27.805693   41166 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:40:27.805748   41166 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:40:27.805808   41166 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:40:27.805852   41166 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:40:27.805907   41166 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:40:27.805950   41166 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:40:27.805998   41166 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:40:27.806045   41166 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:40:27.806113   41166 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:40:27.806212   41166 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:40:27.807603   41166 out.go:252]   - Booting up control plane ...
	I1009 18:40:27.807673   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:40:27.807748   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:40:27.807805   41166 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:40:27.807888   41166 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:40:27.807967   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:40:27.808054   41166 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:40:27.808118   41166 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:40:27.808182   41166 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:40:27.808282   41166 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:40:27.808373   41166 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:40:27.808424   41166 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000969803s
	I1009 18:40:27.808512   41166 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:40:27.808585   41166 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1009 18:40:27.808667   41166 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:40:27.808740   41166 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:40:27.808798   41166 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	I1009 18:40:27.808855   41166 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	I1009 18:40:27.808919   41166 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	I1009 18:40:27.808921   41166 kubeadm.go:318] 
	I1009 18:40:27.808989   41166 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:40:27.809052   41166 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:40:27.809124   41166 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:40:27.809239   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:40:27.809297   41166 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:40:27.809386   41166 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:40:27.809399   41166 kubeadm.go:318] 
	I1009 18:40:27.809438   41166 kubeadm.go:402] duration metric: took 12m10.090749097s to StartCluster
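Both init attempts sat out the full 4m0s control-plane deadline on every component, which is how StartCluster reaches 12m10s: roughly four minutes of restart probing plus two four-minute init waits. The health endpoints kubeadm polls are given in the [control-plane-check] lines and can be probed directly; a sketch assuming the node's shell, with -k because the apiserver serves a cluster-internal certificate:

    # endpoints copied from the [control-plane-check] lines above
    curl -k https://192.168.49.2:8441/livez
    curl -k https://127.0.0.1:10257/healthz
    curl -k https://127.0.0.1:10259/livez
    # in this run all three refuse the TCP connection outright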
	I1009 18:40:27.809468   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:40:27.809513   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:40:27.837743   41166 cri.go:89] found id: ""
	I1009 18:40:27.837757   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.837763   41166 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:40:27.837768   41166 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:40:27.837814   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:40:27.863718   41166 cri.go:89] found id: ""
	I1009 18:40:27.863732   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.863738   41166 logs.go:284] No container was found matching "etcd"
	I1009 18:40:27.863748   41166 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:40:27.863792   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:40:27.889900   41166 cri.go:89] found id: ""
	I1009 18:40:27.889914   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.889920   41166 logs.go:284] No container was found matching "coredns"
	I1009 18:40:27.889924   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:40:27.889980   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:40:27.916941   41166 cri.go:89] found id: ""
	I1009 18:40:27.916954   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.916960   41166 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:40:27.916965   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:40:27.917024   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:40:27.943791   41166 cri.go:89] found id: ""
	I1009 18:40:27.943804   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.943809   41166 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:40:27.943814   41166 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:40:27.943860   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:40:27.970612   41166 cri.go:89] found id: ""
	I1009 18:40:27.970625   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.970631   41166 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:40:27.970635   41166 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:40:27.970683   41166 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:40:27.997688   41166 cri.go:89] found id: ""
	I1009 18:40:27.997700   41166 logs.go:282] 0 containers: []
	W1009 18:40:27.997706   41166 logs.go:284] No container was found matching "kindnet"
	I1009 18:40:27.997713   41166 logs.go:123] Gathering logs for kubelet ...
	I1009 18:40:27.997721   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:40:28.064711   41166 logs.go:123] Gathering logs for dmesg ...
	I1009 18:40:28.064730   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:40:28.076960   41166 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:40:28.076978   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:40:28.135195   41166 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:40:28.128400   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.128940   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.130597   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.131014   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:28.132350   15516 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:40:28.135206   41166 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:40:28.135216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:40:28.194198   41166 logs.go:123] Gathering logs for container status ...
	I1009 18:40:28.194216   41166 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
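With every crictl query empty, the only remaining evidence sits in the journals minikube gathers here; the kubelet unit log in particular should show why the static-pod manifests in /etc/kubernetes/manifests never produced containers. The same pulls, runnable by hand inside the node, with unit names and line counts as logged:

    sudo journalctl -u kubelet -n 400   # kubelet: static-pod admission and runtime errors
    sudo journalctl -u crio -n 400      # CRI-O: image pulls and sandbox creation
    sudo crictl ps -a                   # container status (empty in this run)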
	W1009 18:40:28.224308   41166 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000969803s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000410729s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000637307s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528535s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:40:28.224355   41166 out.go:285] * 
	W1009 18:40:28.224482   41166 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output shown above; duplicate elided]
	W1009 18:40:28.224505   41166 out.go:285] * 
	W1009 18:40:28.226335   41166 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:40:28.230950   41166 out.go:203] 
	W1009 18:40:28.232526   41166 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output shown above; duplicate elided]
	W1009 18:40:28.232549   41166 out.go:285] * 
	I1009 18:40:28.235189   41166 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.564114864Z" level=info msg="createCtr: deleting container d0f3203170f1bf851cc5c3e7e264334abf2f4f7569a6b5394a7218431338d323 from storage" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.56610003Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=1ded6b43-d118-4b70-8e5b-dd4aabd427f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:27 functional-753440 crio[5806]: time="2025-10-09T18:40:27.566508491Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753440_kube-system_0d946ec5c615de29dae011722e300735_0" id=8c417e3f-7b5d-44f6-8082-13c142c8285b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.536705355Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=13df285b-7387-4f01-937e-611c409808fa name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.537772337Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=b2ef6457-a8de-44bf-9645-e025765a3571 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.538868775Z" level=info msg="Creating container: kube-system/etcd-functional-753440/etcd" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.539098973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.54282272Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.54340808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.558070772Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.559965846Z" level=info msg="createCtr: deleting container ID a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291 from idIndex" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.56001007Z" level=info msg="createCtr: removing container a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.560045273Z" level=info msg="createCtr: deleting container a06ac9363965b653d64f09237aa7b9409e3fbd97a9719eef8873b5e27c9a2291 from storage" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:29 functional-753440 crio[5806]: time="2025-10-09T18:40:29.562455923Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=38ca3084-3e46-45a5-bcc8-36519726e888 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.536482041Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=ab7fe81f-8ca6-4783-97fa-1f8f5b5b69b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.537585954Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c9a14339-cbc8-4d33-a435-b9d963fbc47c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.538722204Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.538998993Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.543561387Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.544174518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.560337135Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.561844887Z" level=info msg="createCtr: deleting container ID b2f541e56cb88cf290e567f92b134c3f0309e932679af93777171378d1d056b3 from idIndex" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.561898246Z" level=info msg="createCtr: removing container b2f541e56cb88cf290e567f92b134c3f0309e932679af93777171378d1d056b3" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.561965515Z" level=info msg="createCtr: deleting container b2f541e56cb88cf290e567f92b134c3f0309e932679af93777171378d1d056b3 from storage" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:31 functional-753440 crio[5806]: time="2025-10-09T18:40:31.564636874Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=ee5871e3-ac61-4e86-9eb0-6b504f80e66a name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:37.379190   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:37.379831   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:37.381650   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:37.382123   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:37.383425   16534 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:37 up  1:23,  0 user,  load average: 0.35, 0.11, 0.09
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.566838   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:27 functional-753440 kubelet[14909]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(0d946ec5c615de29dae011722e300735): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:27 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:27 functional-753440 kubelet[14909]: E1009 18:40:27.567563   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:40:28 functional-753440 kubelet[14909]: E1009 18:40:28.847450   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.536187   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564042   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:29 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:29 functional-753440 kubelet[14909]:  > podSandboxID="7e16b1bb2bf2df093cc66fa197bd5344740cdfe9b099dcd26ba3fc1c3435b769"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564174   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:29 functional-753440 kubelet[14909]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:29 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:29 functional-753440 kubelet[14909]: E1009 18:40:29.564212   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.159164   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: I1009 18:40:31.315674   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.316034   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.344233   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.535991   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.564978   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:31 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:31 functional-753440 kubelet[14909]:  > podSandboxID="fb34d4f739975f6378a39e225741fb0e80fac36aeda99b2080b81999ee15d221"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.565115   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:31 functional-753440 kubelet[14909]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:31 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:31 functional-753440 kubelet[14909]: E1009 18:40:31.565167   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	

-- /stdout --
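The repeated "cannot open sd-bus: No such file or directory" errors in the CRI-O and kubelet sections above indicate that CRI-O could not reach systemd over D-Bus while creating the control-plane containers. A minimal sketch of the triage the kubeadm output itself recommends, run inside the node: the profile name and the crictl socket are taken from the log above, while wrapping the commands in `minikube ssh` (and D-Bus being the thing to check) are assumptions, not a confirmed fix.

	# List all Kubernetes containers via CRI-O, as the kubeadm output suggests
	out/minikube-linux-amd64 -p functional-753440 ssh -- \
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	# Inspect the logs of a failing container (substitute a real CONTAINERID)
	out/minikube-linux-amd64 -p functional-753440 ssh -- \
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# Hypothesis check only: "cannot open sd-bus" typically means no reachable D-Bus/systemd
	out/minikube-linux-amd64 -p functional-753440 ssh -- systemctl status dbus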
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (321.760892ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (2.24s)
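The "Stopped" status is consistent with every connection-refused error above: the apiserver container was never created, so nothing listens on the profile's apiserver port (8441). A quick hand-run confirmation; `ss` being present in the node image is an assumption:

	out/minikube-linux-amd64 -p functional-753440 ssh -- sudo ss -ltnp | grep 8441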

x
+
TestFunctional/parallel/PersistentVolumeClaim (241.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
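The warnings that follow show this poll failing against the unreachable apiserver; a hand-run equivalent of the same pod listing, assuming minikube created a kubectl context named after the profile, would be:

	kubectl --context functional-753440 get pods -n kube-system -l integration-test=storage-provisioner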
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the identical warning above was emitted 11 times in total before the next retry]
I1009 18:40:45.056966   14880 retry.go:31] will retry after 7.291437617s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the identical warning above was emitted 27 times in total before the next retry]
I1009 18:41:12.564611   14880 retry.go:31] will retry after 33.083860087s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the identical warning above was emitted 33 times in total before the next retry]
I1009 18:41:45.649544   14880 retry.go:31] will retry after 35.984370693s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 168 more times]
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (300.592817ms)
-- stdout --
	Stopped
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
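For context, here is a minimal sketch of the kind of poll that produces the warnings above: listing pods in "kube-system" by the "integration-test=storage-provisioner" label selector until one is Running or the 4m0s budget expires. This is an illustration built on client-go, not the harness's actual helper; the kubeconfig path, the 5-second poll interval, and the Running-phase check are all assumptions for the sketch.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config). The test harness builds
	// its client from the minikube profile instead; this path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirror the 4m0s deadline from the failure message above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
			LabelSelector: "integration-test=storage-provisioner",
		})
		if err != nil {
			// With the apiserver down, this is where "connection refused" surfaces
			// on every attempt.
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 && pods.Items[0].Status.Phase == "Running" {
			fmt.Println("pod is running")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod:", ctx.Err())
			return
		case <-time.After(5 * time.Second): // poll interval: an assumption
		}
	}
}

Run against the cluster in the state shown here, every List call would fail with the same connection-refused error until the deadline approaches, at which point client-go's client-side rate limiter refuses to wait any longer: that is the "rate: Wait(n=1) would exceed context deadline" message that ends the poll.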
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (295.419202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
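Note the split result: the earlier status call reported the apiserver as Stopped while this one reports the container host as Running, which is why the helper treats exit status 2 as possibly benign. Both fields can be read in a single call via minikube's Go-template status output; a sketch using the same {{.Host}} and {{.APIServer}} fields exercised above ({{.Kubelet}} is a documented status field, not taken from this report):

    out/minikube-linux-amd64 status -p functional-753440 \
      --format='{{.Host}} {{.APIServer}} {{.Kubelet}}'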
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-753440 ssh findmnt -T /mount2                                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image rm kicbase/echo-server:functional-753440 --alsologtostderr                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh findmnt -T /mount3                                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount          │ -p functional-753440 --kill=true                                                                                          │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image          │ functional-753440 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image save --daemon kicbase/echo-server:functional-753440 --alsologtostderr                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/14880.pem                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /usr/share/ca-certificates/14880.pem                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/51391683.0                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/148802.pem                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /usr/share/ca-certificates/148802.pem                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/test/nested/copy/14880/hosts                                                          │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ dashboard      │ --url --port 36195 -p functional-753440 --alsologtostderr -v=1                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image          │ functional-753440 image ls --format short --alsologtostderr                                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format yaml --alsologtostderr                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh pgrep buildkitd                                                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image          │ functional-753440 image build -t localhost/my-image:functional-753440 testdata/build --alsologtostderr                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format json --alsologtostderr                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format table --alsologtostderr                                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:40:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:40:41.059621   59814 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:41.059885   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.059896   59814 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:41.059899   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.060215   59814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:41.060650   59814 out.go:368] Setting JSON to false
	I1009 18:40:41.061515   59814 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4989,"bootTime":1760030252,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:40:41.061609   59814 start.go:141] virtualization: kvm guest
	I1009 18:40:41.063781   59814 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:40:41.065771   59814 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:40:41.065764   59814 notify.go:220] Checking for updates...
	I1009 18:40:41.068913   59814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:40:41.070481   59814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:40:41.071797   59814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:40:41.073119   59814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:40:41.074623   59814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:40:41.076619   59814 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:41.077037   59814 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:40:41.102735   59814 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:40:41.102838   59814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:40:41.165489   59814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:41.154761452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:40:41.165636   59814 docker.go:318] overlay module found
	I1009 18:40:41.167894   59814 out.go:179] * Using the docker driver based on existing profile
	I1009 18:40:41.169565   59814 start.go:305] selected driver: docker
	I1009 18:40:41.169585   59814 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:40:41.169700   59814 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:40:41.172117   59814 out.go:203] 
	W1009 18:40:41.173651   59814 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:40:41.175097   59814 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:44:24 functional-753440 crio[5806]: time="2025-10-09T18:44:24.563070698Z" level=info msg="createCtr: removing container f3247a9daf89b1090806cc152c9ff99de70557a30f80415e8a91d47f244efa80" id=d91b429d-ed41-45cb-8896-bad65b9e1b4d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:24 functional-753440 crio[5806]: time="2025-10-09T18:44:24.563104981Z" level=info msg="createCtr: deleting container f3247a9daf89b1090806cc152c9ff99de70557a30f80415e8a91d47f244efa80 from storage" id=d91b429d-ed41-45cb-8896-bad65b9e1b4d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:24 functional-753440 crio[5806]: time="2025-10-09T18:44:24.565198031Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-753440_kube-system_894f77eb6f96f2cc2bf4bdca611e7cdb_0" id=d91b429d-ed41-45cb-8896-bad65b9e1b4d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.536675174Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=560fc249-2018-479c-b4ec-a0d533493e71 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.537579754Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=45ccc05d-8901-43f1-a36d-c576fff82be1 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.538423628Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-753440/kube-scheduler" id=40571601-bcf3-420c-8e86-363c1f90d399 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.538630496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.541837248Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.542417726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.558009321Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=40571601-bcf3-420c-8e86-363c1f90d399 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.559441212Z" level=info msg="createCtr: deleting container ID 0e62faf460fd1d96578eb68f0e45647befe547317af6f9d032060bd829de3e2c from idIndex" id=40571601-bcf3-420c-8e86-363c1f90d399 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.559476412Z" level=info msg="createCtr: removing container 0e62faf460fd1d96578eb68f0e45647befe547317af6f9d032060bd829de3e2c" id=40571601-bcf3-420c-8e86-363c1f90d399 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.559509347Z" level=info msg="createCtr: deleting container 0e62faf460fd1d96578eb68f0e45647befe547317af6f9d032060bd829de3e2c from storage" id=40571601-bcf3-420c-8e86-363c1f90d399 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:30 functional-753440 crio[5806]: time="2025-10-09T18:44:30.561460883Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-753440_kube-system_c3332277da3037b9d30e61510b9fdccb_0" id=40571601-bcf3-420c-8e86-363c1f90d399 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.536722995Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=2805f641-f30d-4b97-a702-03f1d74e232f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.537646493Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=823f1940-3e7e-4f00-bb72-0c3a37a6a028 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.538545763Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-753440/kube-apiserver" id=b64d90eb-48ba-4b89-a602-1b7b8f18249c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.53881831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.542275312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.542761388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.563594489Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b64d90eb-48ba-4b89-a602-1b7b8f18249c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.56499651Z" level=info msg="createCtr: deleting container ID ed261d002fa2767505494d695a30efdbb3f7b4de5d486da1ca7753e5801141eb from idIndex" id=b64d90eb-48ba-4b89-a602-1b7b8f18249c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.565035226Z" level=info msg="createCtr: removing container ed261d002fa2767505494d695a30efdbb3f7b4de5d486da1ca7753e5801141eb" id=b64d90eb-48ba-4b89-a602-1b7b8f18249c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.565101689Z" level=info msg="createCtr: deleting container ed261d002fa2767505494d695a30efdbb3f7b4de5d486da1ca7753e5801141eb from storage" id=b64d90eb-48ba-4b89-a602-1b7b8f18249c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:44:32 functional-753440 crio[5806]: time="2025-10-09T18:44:32.567206087Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-753440_kube-system_0d946ec5c615de29dae011722e300735_0" id=b64d90eb-48ba-4b89-a602-1b7b8f18249c name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:44:36.091855   19159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:44:36.092394   19159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:44:36.093898   19159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:44:36.094418   19159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:44:36.095961   19159 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:44:36 up  1:27,  0 user,  load average: 0.11, 0.11, 0.09
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:44:25 functional-753440 kubelet[14909]: E1009 18:44:25.440069   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753440.186ce67effdf6217\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdf6217  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-753440 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528118807 +0000 UTC m=+0.734806096,LastTimestamp:2025-10-09 18:36:27.529420334 +0000 UTC m=+0.736107626,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:44:27 functional-753440 kubelet[14909]: E1009 18:44:27.567277   14909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	Oct 09 18:44:27 functional-753440 kubelet[14909]: E1009 18:44:27.708051   14909 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:44:29 functional-753440 kubelet[14909]: E1009 18:44:29.193719   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:44:29 functional-753440 kubelet[14909]: I1009 18:44:29.386091   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:44:29 functional-753440 kubelet[14909]: E1009 18:44:29.386514   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:44:30 functional-753440 kubelet[14909]: E1009 18:44:30.536278   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:44:30 functional-753440 kubelet[14909]: E1009 18:44:30.561769   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:44:30 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:44:30 functional-753440 kubelet[14909]:  > podSandboxID="7a4353736f4a4433982204579f641a25b7ce51b570588adf77ed233c5025e9dc"
	Oct 09 18:44:30 functional-753440 kubelet[14909]: E1009 18:44:30.561870   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:44:30 functional-753440 kubelet[14909]:         container kube-scheduler start failed in pod kube-scheduler-functional-753440_kube-system(c3332277da3037b9d30e61510b9fdccb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:44:30 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:44:30 functional-753440 kubelet[14909]: E1009 18:44:30.561899   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-753440" podUID="c3332277da3037b9d30e61510b9fdccb"
	Oct 09 18:44:31 functional-753440 kubelet[14909]: E1009 18:44:31.614508   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:44:32 functional-753440 kubelet[14909]: E1009 18:44:32.536309   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:44:32 functional-753440 kubelet[14909]: E1009 18:44:32.567508   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:44:32 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:44:32 functional-753440 kubelet[14909]:  > podSandboxID="6fa88d0d4dd2687a2039db7efc159391e5e7ed9ab6f5700abe409768183910fe"
	Oct 09 18:44:32 functional-753440 kubelet[14909]: E1009 18:44:32.567600   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:44:32 functional-753440 kubelet[14909]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(0d946ec5c615de29dae011722e300735): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:44:32 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:44:32 functional-753440 kubelet[14909]: E1009 18:44:32.567629   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:44:34 functional-753440 kubelet[14909]: E1009 18:44:34.718397   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-753440&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 18:44:35 functional-753440 kubelet[14909]: E1009 18:44:35.440723   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-753440.186ce67effdf6217\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdf6217  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-753440 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528118807 +0000 UTC m=+0.734806096,LastTimestamp:2025-10-09 18:36:27.529420334 +0000 UTC m=+0.736107626,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (293.378002ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.53s)
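The root cause is visible in the CRI-O and kubelet excerpts above: every control-plane container create fails with "cannot open sd-bus: No such file or directory", i.e. the runtime is configured for the systemd cgroup manager but cannot reach a systemd bus inside the kic node. A hedged way to confirm from the host (the config path and socket locations below are standard crio/systemd conventions, not taken from this report):

    # Is crio configured with cgroup_manager = "systemd"?
    out/minikube-linux-amd64 -p functional-753440 ssh -- sudo grep -r cgroup_manager /etc/crio/
    # Does the systemd bus the runtime needs actually exist in the node?
    out/minikube-linux-amd64 -p functional-753440 ssh -- ls -l /run/systemd/private /run/dbus/system_bus_socket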

TestFunctional/parallel/MySQL (1.35s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-753440 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-753440 replace --force -f testdata/mysql.yaml: exit status 1 (59.873513ms)

** stderr ** 
	E1009 18:40:47.195104   63166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:47.195733   63166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-753440 replace --force -f testdata/mysql.yaml" failed: exit status 1
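As with the PersistentVolumeClaim failure above, the manifest never reaches the cluster: kubectl cannot connect to 192.168.49.2:8441 at all. The docker inspect below shows 8441/tcp published to the host as 127.0.0.1:32781, so a quick apiserver liveness probe from the host would be (the curl invocation is an assumption; the port mapping is from this report):

    curl -sk https://127.0.0.1:32781/healthz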
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
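The inspect output above is also how the harness learns where the cluster is reachable: container port 8441/tcp (the apiserver) is published on 127.0.0.1:32781. A minimal sketch, outside the test suite, of reading that binding programmatically; the container name is this run's profile, everything else is illustrative:

	// portmap.go: print the host binding for the apiserver port of a kic node,
	// parsed from `docker inspect` JSON of the shape shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-753440").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		for _, b := range cs[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:32781 in this run
		}
	}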
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (309.315472ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-753440 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh -- ls -la /mount-9p                                                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount1 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh findmnt -T /mount1                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount3 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount2 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount1                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image save kicbase/echo-server:functional-753440 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount2                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image rm kicbase/echo-server:functional-753440 --alsologtostderr                                                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount3                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount   │ -p functional-753440 --kill=true                                                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image save --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh sudo cat /etc/ssl/certs/14880.pem                                                                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh sudo cat /usr/share/ca-certificates/14880.pem                                                                                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh sudo cat /etc/ssl/certs/148802.pem                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:40:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:40:41.059621   59814 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:41.059885   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.059896   59814 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:41.059899   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.060215   59814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:41.060650   59814 out.go:368] Setting JSON to false
	I1009 18:40:41.061515   59814 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4989,"bootTime":1760030252,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:40:41.061609   59814 start.go:141] virtualization: kvm guest
	I1009 18:40:41.063781   59814 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:40:41.065771   59814 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:40:41.065764   59814 notify.go:220] Checking for updates...
	I1009 18:40:41.068913   59814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:40:41.070481   59814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:40:41.071797   59814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:40:41.073119   59814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:40:41.074623   59814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:40:41.076619   59814 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:41.077037   59814 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:40:41.102735   59814 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:40:41.102838   59814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:40:41.165489   59814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:41.154761452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:40:41.165636   59814 docker.go:318] overlay module found
	I1009 18:40:41.167894   59814 out.go:179] * Using the docker driver based on existing profile
	I1009 18:40:41.169565   59814 start.go:305] selected driver: docker
	I1009 18:40:41.169585   59814 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:40:41.169700   59814 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:40:41.172117   59814 out.go:203] 
	W1009 18:40:41.173651   59814 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:40:41.175097   59814 out.go:203] 
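	This restart attempt never reaches the cluster: it exits during flag validation because the requested memory (250 MiB) is below minikube's usable floor (1800 MB). A paraphrased sketch of that unit comparison, with the constants taken from the warning above rather than from minikube's source:

	// memcheck.go: the MiB-vs-MB comparison behind RSRC_INSUFFICIENT_REQ_MEMORY,
	// paraphrased; 250 and 1800 come from the log line above, not minikube code.
	package main

	import "fmt"

	func main() {
		const reqMiB = 250
		const minMB = 1800
		reqMB := reqMiB * 1024 * 1024 / 1000 / 1000 // 250 MiB ≈ 262 MB
		if reqMB < minMB {
			fmt.Printf("requested %d MB is less than the usable minimum of %d MB\n", reqMB, minMB)
		}
	}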
	
	
	==> CRI-O <==
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.82623148Z" level=info msg="Checking image status: kicbase/echo-server:functional-753440" id=976fa83d-ab23-4f19-b44b-afd04ec7a9e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850520799Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753440" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850632738Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753440 not found" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850662236Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753440 found" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.8758151Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753440" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.875947263Z" level=info msg="Image localhost/kicbase/echo-server:functional-753440 not found" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.875977055Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753440 found" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.627905355Z" level=info msg="Checking image status: kicbase/echo-server:functional-753440" id=69d18627-4136-4431-8c05-635fa6e2e52c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654094947Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753440" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654244391Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753440 not found" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654281726Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-753440 found" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680627494Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753440" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680746847Z" level=info msg="Image localhost/kicbase/echo-server:functional-753440 not found" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680775592Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-753440 found" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.536545286Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=9d716c6c-0b36-444d-9a43-145939f5140c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.537509174Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8bfc6c23-9671-46cd-b2f2-e852de7a72f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.538661822Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.538884513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.543014511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.543599307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.559854128Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561422863Z" level=info msg="createCtr: deleting container ID 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386 from idIndex" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561465605Z" level=info msg="createCtr: removing container 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561505744Z" level=info msg="createCtr: deleting container 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386 from storage" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.564276404Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:48.117286   17845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:48.117791   17845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:48.119407   17845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:48.119812   17845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:48.121375   17845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:48 up  1:23,  0 user,  load average: 0.53, 0.15, 0.10
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:40 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:40 functional-753440 kubelet[14909]: E1009 18:40:40.566440   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.345009   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.535593   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569692   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:41 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:41 functional-753440 kubelet[14909]:  > podSandboxID="7e16b1bb2bf2df093cc66fa197bd5344740cdfe9b099dcd26ba3fc1c3435b769"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569909   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:41 functional-753440 kubelet[14909]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:41 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569951   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:40:43 functional-753440 kubelet[14909]: E1009 18:40:43.707727   14909 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.161474   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.198207   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: I1009 18:40:45.320113   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.320518   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.535997   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564766   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:46 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:46 functional-753440 kubelet[14909]:  > podSandboxID="fb34d4f739975f6378a39e225741fb0e80fac36aeda99b2080b81999ee15d221"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564854   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:46 functional-753440 kubelet[14909]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:46 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564885   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	Oct 09 18:40:47 functional-753440 kubelet[14909]: E1009 18:40:47.551723   14909 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-753440\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (303.29975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (1.35s)
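Note that every kubelet CreateContainer attempt in the logs above dies on the same runtime error, "cannot open sd-bus: No such file or directory": with CgroupDriver:systemd, the OCI runtime has to reach systemd over the system bus inside the node container, so no control-plane container can start and the apiserver stays unreachable. A hypothetical spot check, not part of this report; the container name is this run's profile and the socket path is the conventional D-Bus location:

	// sdbuscheck.go: probe the kic node for the systemd system bus socket
	// implicated by the "cannot open sd-bus" errors. Name and path are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "exec", "functional-753440",
			"ls", "-l", "/run/dbus/system_bus_socket").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("system bus socket unreachable; the systemd cgroup manager cannot create containers")
		}
	}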

                                                
                                    
TestFunctional/parallel/NodeLabels (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-753440 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-753440 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (52.985773ms)

                                                
                                                
** stderr ** 
	E1009 18:40:45.825860   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.826327   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827508   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827813   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.829387   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-753440 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1009 18:40:45.825860   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.826327   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827508   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827813   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.829387   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1009 18:40:45.825860   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.826327   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827508   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827813   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.829387   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1009 18:40:45.825860   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.826327   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827508   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827813   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.829387   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1009 18:40:45.825860   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.826327   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827508   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827813   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.829387   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1009 18:40:45.825860   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.826327   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827508   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.827813   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:40:45.829387   62293 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
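The template the test passes to kubectl at functional_test.go:234 above is ordinary Go text/template syntax; when the apiserver is reachable it prints the label keys of the first node. A self-contained sketch of the same template run over a mock node list (the labels below are illustrative, not read from this cluster):

	// labels.go: the go-template from the failing kubectl invocation, over mock data.
	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		mock := map[string]any{
			"items": []any{
				map[string]any{"metadata": map[string]any{"labels": map[string]any{
					"minikube.k8s.io/name":    "functional-753440", // illustrative values
					"minikube.k8s.io/primary": "true",
				}}},
			},
		}
		tmpl := template.Must(template.New("labels").Parse(
			"{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
		_ = tmpl.Execute(os.Stdout, mock) // prints: minikube.k8s.io/name minikube.k8s.io/primary
	}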
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-753440
helpers_test.go:243: (dbg) docker inspect functional-753440:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	        "Created": "2025-10-09T18:13:38.612842612Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 29511,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:13:38.64668907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hostname",
	        "HostsPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/hosts",
	        "LogPath": "/var/lib/docker/containers/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205/694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205-json.log",
	        "Name": "/functional-753440",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-753440:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-753440",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "694bf539948e87abad4b5535f8aae1d4ceb2c4a18fe44b13ad7ede52e5c70205",
	                "LowerDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfd4c662af8ac056d7d5fec90efe5410ba4629dcf04d7683cd2ac5a37b88a862/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-753440",
	                "Source": "/var/lib/docker/volumes/functional-753440/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-753440",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-753440",
	                "name.minikube.sigs.k8s.io": "functional-753440",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d81e656cb7fd298b6be7b84ddafb7e6d0b2df1b9904e1c444b24eb780385409d",
	            "SandboxKey": "/var/run/docker/netns/d81e656cb7fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-753440": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:52:a9:f3:ce:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d69cee380b2506f35d197ee18a95b90b110e191b547e1220873c5484ffc92ad3",
	                    "EndpointID": "2f780bc31b7359d4036c8b32e09c7f7657923ca8c46e8392506706282465c3ec",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-753440",
	                        "694bf539948e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
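
The "Ports" map above is the piece later steps consume; the harness resolves the node's SSH endpoint (127.0.0.1:32778 here) by running a Go template over this same inspect output, e.g.:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-753440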
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-753440 -n functional-753440: exit status 2 (305.8384ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-753440 ssh sudo systemctl is-active containerd                                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdspecific-port3245943124/001:/mount-9p --alsologtostderr -v=1 --port 46464                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh -- ls -la /mount-9p                                                                                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount1 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ ssh     │ functional-753440 ssh findmnt -T /mount1                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount3 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ mount   │ -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount2 --alsologtostderr -v=1                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount1                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image save kicbase/echo-server:functional-753440 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount2                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image rm kicbase/echo-server:functional-753440 --alsologtostderr                                                                              │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh     │ functional-753440 ssh findmnt -T /mount3                                                                                                                        │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image ls                                                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount   │ -p functional-753440 --kill=true                                                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image   │ functional-753440 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image   │ functional-753440 image save --daemon kicbase/echo-server:functional-753440 --alsologtostderr                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:40:41
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:40:41.059621   59814 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:41.059885   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.059896   59814 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:41.059899   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.060215   59814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:41.060650   59814 out.go:368] Setting JSON to false
	I1009 18:40:41.061515   59814 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4989,"bootTime":1760030252,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:40:41.061609   59814 start.go:141] virtualization: kvm guest
	I1009 18:40:41.063781   59814 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:40:41.065771   59814 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:40:41.065764   59814 notify.go:220] Checking for updates...
	I1009 18:40:41.068913   59814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:40:41.070481   59814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:40:41.071797   59814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:40:41.073119   59814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:40:41.074623   59814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:40:41.076619   59814 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:41.077037   59814 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:40:41.102735   59814 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:40:41.102838   59814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:40:41.165489   59814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:41.154761452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:40:41.165636   59814 docker.go:318] overlay module found
	I1009 18:40:41.167894   59814 out.go:179] * Using the docker driver based on existing profile
	I1009 18:40:41.169565   59814 start.go:305] selected driver: docker
	I1009 18:40:41.169585   59814 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:40:41.169700   59814 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:40:41.172117   59814 out.go:203] 
	W1009 18:40:41.173651   59814 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:40:41.175097   59814 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.82623148Z" level=info msg="Checking image status: kicbase/echo-server:functional-753440" id=976fa83d-ab23-4f19-b44b-afd04ec7a9e3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850520799Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753440" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850632738Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753440 not found" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.850662236Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-753440 found" id=2d40d6ea-f45d-4259-b781-6d4cac2194f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.8758151Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753440" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.875947263Z" level=info msg="Image localhost/kicbase/echo-server:functional-753440 not found" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:44 functional-753440 crio[5806]: time="2025-10-09T18:40:44.875977055Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-753440 found" id=5b3b0ae1-4a11-42e2-aaed-d29f883acbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.627905355Z" level=info msg="Checking image status: kicbase/echo-server:functional-753440" id=69d18627-4136-4431-8c05-635fa6e2e52c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654094947Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-753440" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654244391Z" level=info msg="Image docker.io/kicbase/echo-server:functional-753440 not found" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.654281726Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-753440 found" id=f2b31bc3-fbff-4e8d-9be2-dbb89d1a45b8 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680627494Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-753440" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680746847Z" level=info msg="Image localhost/kicbase/echo-server:functional-753440 not found" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:45 functional-753440 crio[5806]: time="2025-10-09T18:40:45.680775592Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-753440 found" id=05b08adb-5802-4c34-8620-5dfc4da1ad5f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.536545286Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=9d716c6c-0b36-444d-9a43-145939f5140c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.537509174Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8bfc6c23-9671-46cd-b2f2-e852de7a72f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.538661822Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-753440/kube-controller-manager" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.538884513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.543014511Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.543599307Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.559854128Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561422863Z" level=info msg="createCtr: deleting container ID 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386 from idIndex" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561465605Z" level=info msg="createCtr: removing container 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.561505744Z" level=info msg="createCtr: deleting container 1d85108123728577edabc2bbaf503ed235cb75b6ab86cdd9cdcfba3c8e1f5386 from storage" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:40:46 functional-753440 crio[5806]: time="2025-10-09T18:40:46.564276404Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-753440_kube-system_ddd5b817e547272bbbe5e6f0c16b8e98_0" id=09dd28a9-7698-49da-9c11-bc1bd2156e16 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:40:46.751051   17640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:46.751668   17640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:46.753321   17640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:46.753942   17640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1009 18:40:46.755579   17640 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:40:46 up  1:23,  0 user,  load average: 0.53, 0.15, 0.10
	Linux functional-753440 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:40:40 functional-753440 kubelet[14909]:         container kube-apiserver start failed in pod kube-apiserver-functional-753440_kube-system(0d946ec5c615de29dae011722e300735): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:40 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:40 functional-753440 kubelet[14909]: E1009 18:40:40.566440   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-753440" podUID="0d946ec5c615de29dae011722e300735"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.345009   14909 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-753440.186ce67effdfc72b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-753440,UID:functional-753440,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-753440 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-753440,},FirstTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,LastTimestamp:2025-10-09 18:36:27.528144683 +0000 UTC m=+0.734831963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-753440,}"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.535593   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569692   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:41 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:41 functional-753440 kubelet[14909]:  > podSandboxID="7e16b1bb2bf2df093cc66fa197bd5344740cdfe9b099dcd26ba3fc1c3435b769"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569909   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:41 functional-753440 kubelet[14909]:         container etcd start failed in pod etcd-functional-753440_kube-system(894f77eb6f96f2cc2bf4bdca611e7cdb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:41 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:41 functional-753440 kubelet[14909]: E1009 18:40:41.569951   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-753440" podUID="894f77eb6f96f2cc2bf4bdca611e7cdb"
	Oct 09 18:40:43 functional-753440 kubelet[14909]: E1009 18:40:43.707727   14909 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.161474   14909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-753440?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.198207   14909 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: I1009 18:40:45.320113   14909 kubelet_node_status.go:75] "Attempting to register node" node="functional-753440"
	Oct 09 18:40:45 functional-753440 kubelet[14909]: E1009 18:40:45.320518   14909 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-753440"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.535997   14909 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-753440\" not found" node="functional-753440"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564766   14909 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:40:46 functional-753440 kubelet[14909]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:46 functional-753440 kubelet[14909]:  > podSandboxID="fb34d4f739975f6378a39e225741fb0e80fac36aeda99b2080b81999ee15d221"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564854   14909 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:40:46 functional-753440 kubelet[14909]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-753440_kube-system(ddd5b817e547272bbbe5e6f0c16b8e98): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:40:46 functional-753440 kubelet[14909]:  > logger="UnhandledError"
	Oct 09 18:40:46 functional-753440 kubelet[14909]: E1009 18:40:46.564885   14909 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-753440" podUID="ddd5b817e547272bbbe5e6f0c16b8e98"
	

                                                
                                                
-- /stdout --
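
Every CreateContainer attempt in the CRI-O and kubelet logs above fails with "cannot open sd-bus: No such file or directory", i.e. the runtime's systemd cgroup manager cannot reach a systemd bus inside the node container, so no control-plane container (apiserver, etcd, controller-manager) ever starts and port 8441 stays closed. A quick manual check for the bus socket might look like the line below; the socket path is an assumption for a systemd cgroup-driver setup, not something this report confirms:

	out/minikube-linux-amd64 -p functional-753440 ssh "ls -l /run/systemd/private"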
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-753440 -n functional-753440: exit status 2 (311.31192ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-753440" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (1.36s)
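
The label assertion itself never ran: with the control-plane containers stuck in CreateContainerError, nothing listens on 8441 and every kubectl call is refused. On a healthy cluster the equivalent hand-run check would be, assuming the same kubectl context used throughout this run:

	kubectl --context functional-753440 get nodes --show-labels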

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-753440 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-753440 create deployment hello-node --image kicbase/echo-server: exit status 1 (66.610169ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-753440 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 service list: exit status 103 (289.359315ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753440 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753440"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-753440 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-753440 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753440\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 service list -o json: exit status 103 (365.253092ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753440 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753440"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-753440 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)
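
For comparison, on a healthy cluster "service list -o json" prints a JSON array that is easy to post-process; a sketch with jq, where the "Name" field is assumed from minikube's usual output shape rather than confirmed by this run:

	out/minikube-linux-amd64 -p functional-753440 service list -o json | jq -r '.[].Name'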

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1009 18:40:34.303166   55619 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:34.303338   55619 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:34.303346   55619 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:34.303351   55619 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:34.303712   55619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:34.304062   55619 mustload.go:65] Loading cluster: functional-753440
I1009 18:40:34.306642   55619 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:34.309429   55619 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:34.341876   55619 host.go:66] Checking if "functional-753440" exists ...
I1009 18:40:34.342286   55619 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 18:40:34.452087   55619 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-10-09 18:40:34.438915389 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 18:40:34.452307   55619 api_server.go:166] Checking apiserver status ...
I1009 18:40:34.452364   55619 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1009 18:40:34.452413   55619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:34.475342   55619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
W1009 18:40:34.585215   55619 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1009 18:40:34.590449   55619 out.go:179] * The control-plane node functional-753440 apiserver is not running: (state=Stopped)
I1009 18:40:34.592606   55619 out.go:179]   To start a cluster, run: "minikube start -p functional-753440"

                                                
                                                
stdout: * The control-plane node functional-753440 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-753440"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 55618: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)
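
Exit status 103 is the apiserver pre-flight visible in the tunnel log above: ssh into the node and pgrep for the apiserver process. The same probe can be reproduced by hand with the command taken verbatim from that log:

	out/minikube-linux-amd64 -p functional-753440 ssh "sudo pgrep -xnf kube-apiserver.*minikube.*"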

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 service --namespace=default --https --url hello-node: exit status 103 (326.04312ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753440 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753440"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-753440 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-753440 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-753440 apply -f testdata/testsvc.yaml: exit status 1 (78.560105ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-753440 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.08s)
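
The apply dies while downloading the OpenAPI schema, before the manifest is even submitted. As the stderr hints, validation can be skipped, though with the apiserver down the subsequent request would still be refused:

	kubectl --context functional-753440 apply -f testdata/testsvc.yaml --validate=false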

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1009 18:40:34.688579   14880 retry.go:31] will retry after 4.032297117s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-753440 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-753440 get svc nginx-svc: exit status 1 (50.735196ms)

                                                
                                                
** stderr ** 
	E1009 18:42:21.678901   66292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:42:21.679320   66292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:42:21.680784   66292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:42:21.681132   66292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1009 18:42:21.682507   66292 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-753440 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.00s)
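
The hostless "http:" URL is a symptom of the missing tunnel IP: nginx-svc never received a LoadBalancer ingress address, so the URL the test builds has an empty host. A sketch of the same lookup by hand (the jsonpath expression is assumed, not taken from this run):

	IP=$(kubectl --context functional-753440 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -fsS "http://$IP/"   # with IP empty this is the same hostless URL the test retried against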

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 service hello-node --url --format={{.IP}}: exit status 103 (270.164189ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753440 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753440"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-753440 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-753440 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753440\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 service hello-node --url: exit status 103 (266.021173ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-753440 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-753440"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-753440 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-753440 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-753440"
functional_test.go:1579: failed to parse "* The control-plane node functional-753440 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753440\"": parse "* The control-plane node functional-753440 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-753440\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdany-port2713979380/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760035239342767103" to /tmp/TestFunctionalparallelMountCmdany-port2713979380/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760035239342767103" to /tmp/TestFunctionalparallelMountCmdany-port2713979380/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760035239342767103" to /tmp/TestFunctionalparallelMountCmdany-port2713979380/001/test-1760035239342767103
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.673526ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 18:40:39.638736   14880 retry.go:31] will retry after 569.170037ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 18:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 18:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 18:40 test-1760035239342767103
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh cat /mount-9p/test-1760035239342767103
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-753440 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-753440 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (59.329889ms)

                                                
                                                
** stderr ** 
	E1009 18:40:41.155146   59849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-753440 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (285.780343ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=36787)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  9 18:40 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  9 18:40 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  9 18:40 test-1760035239342767103
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-753440 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdany-port2713979380/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdany-port2713979380/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2713979380/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:36787
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2713979380/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdany-port2713979380/001:/mount-9p --alsologtostderr -v=1] stderr:
I1009 18:40:39.395069   58780 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:39.395440   58780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:39.395454   58780 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:39.395460   58780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:39.395744   58780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:39.396096   58780 mustload.go:65] Loading cluster: functional-753440
I1009 18:40:39.396616   58780 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:39.397241   58780 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:39.418766   58780 host.go:66] Checking if "functional-753440" exists ...
I1009 18:40:39.419120   58780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1009 18:40:39.491217   58780 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:39.480084209 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1009 18:40:39.491423   58780 cli_runner.go:164] Run: docker network inspect functional-753440 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 18:40:39.512955   58780 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port2713979380/001 into VM as /mount-9p ...
I1009 18:40:39.515205   58780 out.go:179]   - Mount type:   9p
I1009 18:40:39.516826   58780 out.go:179]   - User ID:      docker
I1009 18:40:39.518281   58780 out.go:179]   - Group ID:     docker
I1009 18:40:39.519953   58780 out.go:179]   - Version:      9p2000.L
I1009 18:40:39.521822   58780 out.go:179]   - Message Size: 262144
I1009 18:40:39.523884   58780 out.go:179]   - Options:      map[]
I1009 18:40:39.525788   58780 out.go:179]   - Bind Address: 192.168.49.1:36787
I1009 18:40:39.527402   58780 out.go:179] * Userspace file server: 
I1009 18:40:39.527545   58780 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1009 18:40:39.527615   58780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:39.546546   58780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
I1009 18:40:39.650668   58780 mount.go:180] unmount for /mount-9p ran successfully
I1009 18:40:39.650695   58780 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1009 18:40:39.659970   58780 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36787,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1009 18:40:39.707382   58780 main.go:125] stdlog: ufs.go:141 connected
I1009 18:40:39.707575   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tversion tag 65535 msize 262144 version '9P2000.L'
I1009 18:40:39.707624   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rversion tag 65535 msize 262144 version '9P2000'
I1009 18:40:39.707823   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1009 18:40:39.707902   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rattach tag 0 aqid (20fa06f ca4675ad 'd')
I1009 18:40:39.708261   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 0
I1009 18:40:39.708398   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa06f ca4675ad 'd') m d775 at 0 mt 1760035239 l 4096 t 0 d 0 ext )
I1009 18:40:39.709897   58780 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/.mount-process: {Name:mk9244562022331fd6788abe3449547b8ef78764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:40:39.710097   58780 mount.go:105] mount successful: ""
I1009 18:40:39.712286   58780 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port2713979380/001 to /mount-9p
I1009 18:40:39.713850   58780 out.go:203] 
I1009 18:40:39.715512   58780 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1009 18:40:40.797512   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 0
I1009 18:40:40.797645   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa06f ca4675ad 'd') m d775 at 0 mt 1760035239 l 4096 t 0 d 0 ext )
I1009 18:40:40.798100   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 1 
I1009 18:40:40.798175   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 
I1009 18:40:40.798366   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Topen tag 0 fid 1 mode 0
I1009 18:40:40.798418   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Ropen tag 0 qid (20fa06f ca4675ad 'd') iounit 0
I1009 18:40:40.798569   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 0
I1009 18:40:40.798667   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa06f ca4675ad 'd') m d775 at 0 mt 1760035239 l 4096 t 0 d 0 ext )
I1009 18:40:40.798938   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 0 count 262120
I1009 18:40:40.799112   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 258
I1009 18:40:40.799297   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 258 count 261862
I1009 18:40:40.799344   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 0
I1009 18:40:40.799532   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 258 count 262120
I1009 18:40:40.799577   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 0
I1009 18:40:40.799717   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1009 18:40:40.799750   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa071 ca4675ad '') 
I1009 18:40:40.799857   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:40.799935   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa071 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:40.800092   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:40.800204   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa071 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:40.800353   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:40.800405   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:40.800538   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1009 18:40:40.800576   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa070 ca4675ad '') 
I1009 18:40:40.800660   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:40.800726   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa070 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:40.800808   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:40.800869   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa070 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:40.800972   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:40.801001   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:40.801168   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 2 0:'test-1760035239342767103' 
I1009 18:40:40.801198   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa072 ca4675ad '') 
I1009 18:40:40.801301   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:40.801377   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('test-1760035239342767103' 'jenkins' 'balintp' '' q (20fa072 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:40.801487   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:40.801583   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('test-1760035239342767103' 'jenkins' 'balintp' '' q (20fa072 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:40.801681   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:40.801703   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:40.801865   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 258 count 262120
I1009 18:40:40.801906   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 0
I1009 18:40:40.802084   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 1
I1009 18:40:40.802116   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.086806   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 1 0:'test-1760035239342767103' 
I1009 18:40:41.086863   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa072 ca4675ad '') 
I1009 18:40:41.087033   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 1
I1009 18:40:41.087211   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('test-1760035239342767103' 'jenkins' 'balintp' '' q (20fa072 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.087365   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 1 newfid 2 
I1009 18:40:41.087405   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 
I1009 18:40:41.087512   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Topen tag 0 fid 2 mode 0
I1009 18:40:41.087578   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Ropen tag 0 qid (20fa072 ca4675ad '') iounit 0
I1009 18:40:41.087708   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 1
I1009 18:40:41.087790   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('test-1760035239342767103' 'jenkins' 'balintp' '' q (20fa072 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.088078   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 2 offset 0 count 24
I1009 18:40:41.088155   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 24
I1009 18:40:41.088393   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:41.088429   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.088584   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 1
I1009 18:40:41.088618   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.434491   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 0
I1009 18:40:41.434622   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa06f ca4675ad 'd') m d775 at 0 mt 1760035239 l 4096 t 0 d 0 ext )
I1009 18:40:41.434950   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 1 
I1009 18:40:41.434994   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 
I1009 18:40:41.435145   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Topen tag 0 fid 1 mode 0
I1009 18:40:41.435201   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Ropen tag 0 qid (20fa06f ca4675ad 'd') iounit 0
I1009 18:40:41.435334   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 0
I1009 18:40:41.435445   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa06f ca4675ad 'd') m d775 at 0 mt 1760035239 l 4096 t 0 d 0 ext )
I1009 18:40:41.435714   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 0 count 262120
I1009 18:40:41.435879   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 258
I1009 18:40:41.436072   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 258 count 261862
I1009 18:40:41.436115   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 0
I1009 18:40:41.436356   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 258 count 262120
I1009 18:40:41.436396   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 0
I1009 18:40:41.436563   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1009 18:40:41.436615   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa071 ca4675ad '') 
I1009 18:40:41.436736   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:41.436813   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa071 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.436933   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:41.437033   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa071 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.437197   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:41.437226   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.437379   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1009 18:40:41.437421   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa070 ca4675ad '') 
I1009 18:40:41.437520   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:41.437605   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa070 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.437716   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:41.437800   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa070 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.437931   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:41.437953   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.438081   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 2 0:'test-1760035239342767103' 
I1009 18:40:41.438120   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rwalk tag 0 (20fa072 ca4675ad '') 
I1009 18:40:41.438221   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:41.438287   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('test-1760035239342767103' 'jenkins' 'balintp' '' q (20fa072 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.438405   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tstat tag 0 fid 2
I1009 18:40:41.438478   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rstat tag 0 st ('test-1760035239342767103' 'jenkins' 'balintp' '' q (20fa072 ca4675ad '') m 644 at 0 mt 1760035239 l 24 t 0 d 0 ext )
I1009 18:40:41.438578   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 2
I1009 18:40:41.438598   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.438727   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tread tag 0 fid 1 offset 258 count 262120
I1009 18:40:41.438763   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rread tag 0 count 0
I1009 18:40:41.438880   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 1
I1009 18:40:41.438906   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.440113   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1009 18:40:41.440176   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rerror tag 0 ename 'file not found' ecode 0
I1009 18:40:41.724119   58780 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:52122 Tclunk tag 0 fid 0
I1009 18:40:41.724201   58780 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:52122 Rclunk tag 0
I1009 18:40:41.724562   58780 main.go:125] stdlog: ufs.go:147 disconnected
I1009 18:40:41.743345   58780 out.go:179] * Unmounting /mount-9p ...
I1009 18:40:41.744870   58780 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1009 18:40:41.752982   58780 mount.go:180] unmount for /mount-9p ran successfully
I1009 18:40:41.753149   58780 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/.mount-process: {Name:mk9244562022331fd6788abe3449547b8ef78764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1009 18:40:41.755426   58780 out.go:203] 
W1009 18:40:41.757209   58780 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1009 18:40:41.759269   58780 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.50s)
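
Editor's note: the 9p mount itself worked here (the file server logs Tversion/Tattach and serves the three test files), but the busybox pod could never run because the apiserver on 192.168.49.2:8441 was refusing connections, so /mount-9p/pod-dates was never written (the Twalk for 'pod-dates' returns 'file not found'). The guest-side check the test retries at functional_test_mount_test.go:115 is just findmnt over ssh. A minimal sketch of that verification loop, assuming `minikube` is on PATH and using the profile name from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForMount polls `minikube ssh findmnt` until the 9p mount shows up
    // inside the node, roughly mirroring the test's retry behavior.
    func waitForMount(profile, mountPoint string, attempts int) error {
        delay := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("minikube", "-p", profile,
                "ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
            if err := cmd.Run(); err == nil {
                return nil // mount is visible inside the node
            }
            time.Sleep(delay)
            delay *= 2 // back off between attempts
        }
        return fmt.Errorf("%s never appeared as a 9p mount", mountPoint)
    }

    func main() {
        if err := waitForMount("functional-753440", "/mount-9p", 5); err != nil {
            fmt.Println(err)
        }
    }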

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-753440" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-753440" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-753440
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image load --daemon kicbase/echo-server:functional-753440 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-753440" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)
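
Editor's note: the three daemon-load failures above (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) share one verification step: run `image load --daemon`, then scan the output of `image ls` for the tag, which never appears. A minimal sketch of that check, under the same PATH/profile assumptions as the sketch above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageLoaded reports whether `minikube image ls` lists the given tag,
    // the same check functional_test.go:466 performs after an image load.
    func imageLoaded(profile, tag string) (bool, error) {
        out, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
        if err != nil {
            return false, err
        }
        return strings.Contains(string(out), tag), nil
    }

    func main() {
        ok, err := imageLoaded("functional-753440", "kicbase/echo-server:functional-753440")
        fmt.Println(ok, err) // false in this run: the load never reached the node
    }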

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image save kicbase/echo-server:functional-753440 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1009 18:40:45.973484   62364 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:45.973763   62364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:45.973773   62364 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:45.973778   62364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:45.974022   62364 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:45.974712   62364 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:45.974824   62364 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:45.975237   62364 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
	I1009 18:40:45.995907   62364 ssh_runner.go:195] Run: systemctl --version
	I1009 18:40:45.995954   62364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
	I1009 18:40:46.015438   62364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
	I1009 18:40:46.119749   62364 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1009 18:40:46.119837   62364 cache_images.go:254] Failed to load cached images for "functional-753440": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1009 18:40:46.119866   62364 cache_images.go:266] failed pushing to: functional-753440

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
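
Editor's note: ImageSaveToFile and ImageLoadFromFile fail as a pair: `image save` exits cleanly but writes no file, so the later `image load` stats a tarball that was never created (cache_images.go:254 above). Guarding the round trip with an existence check surfaces the first failure explicitly; a sketch, with a hypothetical /tmp path standing in for the workspace path in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // saveThenLoad saves an image to a tar and reloads it, failing fast if
    // the save produced no file, which is the situation in the logs above.
    func saveThenLoad(profile, tag, tar string) error {
        if err := exec.Command("minikube", "-p", profile, "image", "save", tag, tar).Run(); err != nil {
            return fmt.Errorf("image save: %w", err)
        }
        if _, err := os.Stat(tar); err != nil {
            return fmt.Errorf("save reported success but %s is missing: %w", tar, err)
        }
        return exec.Command("minikube", "-p", profile, "image", "load", tar).Run()
    }

    func main() {
        err := saveThenLoad("functional-753440",
            "kicbase/echo-server:functional-753440", "/tmp/echo-server-save.tar")
        fmt.Println(err)
    }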

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-753440
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image save --daemon kicbase/echo-server:functional-753440 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-753440
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-753440: exit status 1 (19.36806ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-753440

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-753440

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
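
Editor's note: for ImageSaveDaemon the whole assertion is the exit code of `docker image inspect` on the localhost/-prefixed name the test expects after `image save --daemon` from a crio-backed node. A sketch of that exit-code check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // inDockerDaemon mirrors functional_test.go:447: `docker image inspect`
    // exits non-zero when the image is absent from the host daemon.
    func inDockerDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        // The test checks the localhost/-prefixed name after `image save --daemon`.
        fmt.Println(inDockerDaemon("localhost/kicbase/echo-server:functional-753440"))
    }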

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (502.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 18:45:34.618099   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:34.624532   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:34.635882   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:34.657212   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:34.698646   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:34.780163   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:34.941790   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:35.263556   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:35.905609   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:37.187294   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:39.750223   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:44.871784   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:45:55.113500   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:46:15.595485   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:46:56.557805   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:48:18.482677   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:50:34.618281   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 18:51:02.331745   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m20.800285961s)

                                                
                                                
-- stdout --
	* [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
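
The subnet, gateway, and MTU chosen above can be read back from Docker to confirm the network was created as logged. A minimal check, assuming a local Docker daemon and the network name from this run:

    # Print the subnet and gateway of the minikube-created bridge network.
    docker network inspect ha-608611 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # Expected from this run: 192.168.49.0/24 192.168.49.1
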
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
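
The preload step above populates the ha-608611 Docker volume with the container-image store before the node container exists, so CRI-O starts with every Kubernetes image already present. A sketch for confirming the extraction, assuming the volume name from this run (the storage path is an assumption based on CRI-O's default containers/storage layout):

    # List the extracted image store inside the volume via a throwaway container.
    docker run --rm -v ha-608611:/var busybox ls /var/lib/containers/storage
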
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
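
Each --publish=127.0.0.1:: flag above binds a container port to a random loopback port on the host; the inspect calls below resolve those assignments (this run got 32783 for SSH). The same mapping can be read with docker port, using the container name from this run:

    # Resolve which loopback port Docker assigned to the node's SSH (22/tcp).
    docker port ha-608611 22/tcp
    # From this run: 127.0.0.1:32783
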
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
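
For later TLS dials to succeed, the server certificate generated here must carry every entry of the san list as a Subject Alternative Name. A quick way to read the SANs back out of the generated file, assuming the host paths from this log:

    # Show the SANs baked into the freshly generated server certificate.
    openssl x509 -noout -text -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'
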
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
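
All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf, and the daemon-reload/restart pair makes them effective. The result can be read back from the node; a sketch, assuming minikube ssh works for this profile:

    # Confirm the pause image, cgroup manager, and conmon cgroup minikube just configured.
    minikube ssh -p ha-608611 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
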
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
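
The unit fragment above is what minikube installs as the kubelet drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears further down). Once the node is running, the merged unit can be inspected; a sketch, assuming SSH access to the node:

    # Show the effective kubelet unit with all drop-ins applied.
    minikube ssh -p ha-608611 -- systemctl cat kubelet
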
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
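
This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below and handed to kubeadm init. A config like this can be sanity-checked without mutating the host; a sketch, assuming a matching kubeadm binary and a local copy saved as kubeadm.yaml:

    # Parse and exercise the InitConfiguration/ClusterConfiguration documents without applying anything.
    sudo kubeadm init --config kubeadm.yaml --dry-run
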
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
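
Because the ip_vs modules were unavailable (the lsmod check above), this manifest runs kube-vip in ARP mode only: leader election picks one control-plane node, which binds the VIP 192.168.49.254/32 on eth0 and answers ARP for it, while IPVS-based control-plane load-balancing is skipped. Whether the VIP actually landed can be checked from inside the node; a sketch assuming minikube ssh access:

    # The elected leader should carry the VIP as a secondary address on eth0.
    minikube ssh -p ha-608611 -- ip addr show eth0
    # Look for: inet 192.168.49.254/32
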
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
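
The link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention for CA directories: each certificate is linked as <hash>.0, where <hash> is what openssl x509 -hash prints for it, which is exactly what the invocations above compute. The same value can be reproduced on the node:

    # Print the subject hash OpenSSL uses for the CA link name.
    minikube ssh -p ha-608611 -- openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # From this run the resulting link was /etc/ssl/certs/b5213941.0
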
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
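All three control-plane components failed their health checks for the full 4m0s window. The probes and the crictl triage that kubeadm recommends can be rerun by hand inside the node; a sketch using the endpoints and socket path from the output above (`-k` because the health endpoints serve self-signed certificates):

    # Re-run the health probes kubeadm was polling.
    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler

    # List all Kubernetes containers (including exited ones) under CRI-O,
    # then inspect the logs of whichever one crashed.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID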
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
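Before retrying, minikube tears down the half-initialized control plane and checks whether the kubelet unit is still active; roughly equivalent to the manual sequence:

    # Tear down the state left by the failed init (static pod manifests, etcd data).
    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # Exit status reports whether the kubelet service is still active.
    sudo systemctl is-active --quiet kubelet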
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
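With no control-plane containers found, minikube falls back to host-level evidence. The same bundle can be collected manually with the commands it runs above:

    # Container status, falling back to docker if crictl is unavailable.
    sudo crictl ps -a || sudo docker ps -a
    # Recent kubelet and CRI-O unit logs, plus kernel warnings and errors.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Node view via the API server; fails here because nothing listens on 8443.
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig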
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 

** /stderr **
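The kubeadm failure above already names the triage path; a minimal sketch of that sequence, run inside the node (e.g. via out/minikube-linux-amd64 -p ha-608611 ssh), with CONTAINERID standing in for whatever ID the first command surfaces:

	# list all Kubernetes containers, including exited ones
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then read the logs of the failing container found above
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID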
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
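The inspect dump shows the node container running with the API server's 8443/tcp bound to a loopback port; one way to pull just that mapping, reusing the Go-template style this log itself applies to the SSH port further down (the profile name ha-608611 comes from this run):

	# prints 32786 for this run, matching NetworkSettings.Ports above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-608611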
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (295.026233ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:52:59.601862   73139 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
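The status error above is the stale-kubeconfig case the warning describes; the fix the output itself suggests, scoped to this profile (whether the endpoint then answers still depends on the control plane actually coming up):

	# rewrite the ha-608611 entry in the kubeconfig to the current endpoint
	out/minikube-linux-amd64 -p ha-608611 update-context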
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-753440 ssh findmnt -T /mount3                                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ mount          │ -p functional-753440 --kill=true                                                                                          │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image          │ functional-753440 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image save --daemon kicbase/echo-server:functional-753440 --alsologtostderr                             │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/14880.pem                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /usr/share/ca-certificates/14880.pem                                                       │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/51391683.0                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/148802.pem                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /usr/share/ca-certificates/148802.pem                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                  │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh sudo cat /etc/test/nested/copy/14880/hosts                                                          │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ dashboard      │ --url --port 36195 -p functional-753440 --alsologtostderr -v=1                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image          │ functional-753440 image ls --format short --alsologtostderr                                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format yaml --alsologtostderr                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ ssh            │ functional-753440 ssh pgrep buildkitd                                                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │                     │
	│ image          │ functional-753440 image build -t localhost/my-image:functional-753440 testdata/build --alsologtostderr                    │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format json --alsologtostderr                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format table --alsologtostderr                                                               │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                                   │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                                │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
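For reference, the script above pins the container's hostname to the 127.0.1.1 loopback alias; the effect can be confirmed on the node with a one-liner like this (a sketch, not part of this run's output):

	grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 ha-608611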
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
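The find invocation in the preceding step is logged with its shell quoting stripped; a hedged reconstruction of the same command as it would be typed interactively (wildcards quoted so that find, not the shell, expands them):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;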
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
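Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values; a quick way to verify on the node (a sketch, expected output shown as comments):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]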
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
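A config rendered like the one above can be checked offline before it is handed to kubeadm init; a sketch, assuming the kubeadm binary staged under /var/lib/minikube/binaries (kubeadm config validate exists in v1.26 and later):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or exercise the full init code path without mutating the node:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run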
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
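kube-vip runs as a static pod and elects a leader through the plndr-cp-lock Lease named in the env block above, binding the VIP 192.168.49.254 on the vip_interface of the winner. In a run where the control plane actually comes up, both can be checked with (a sketch, not output from this log):

	kubectl -n kube-system get lease plndr-cp-lock    # HOLDER should be one control-plane node
	ip addr show eth0 | grep 192.168.49.254           # the VIP appears on the leader's eth0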
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
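The <hash>.0 symlink names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is what the openssl x509 -hash calls compute; for example, for the cluster CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # the name OpenSSL resolves at verify time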
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
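The triage path for this failure is the one the output above suggests: list the kube containers through the crio socket and read the logs of whichever one is failing. Run on the ha-608611 node (commands taken verbatim from the kubeadm advice above; CONTAINERID is a placeholder):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

As the container status section later in this report shows, ps -a comes back empty on this run, which already narrows the failure to container creation rather than a crash after start.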
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
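Before reading the runtime sections below, the three health endpoints kubeadm polled can be probed directly from the node; the URLs are the ones printed by control-plane-check above, and on this run all three refuse connections because the static pod containers were never created:

    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler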
	
	
	==> CRI-O <==
	Oct 09 18:52:51 ha-608611 crio[779]: time="2025-10-09T18:52:51.465522506Z" level=info msg="createCtr: removing container cf586613d7d6c7101a35d57ff4399b19125d2d25c376e75e0e2bc342279f87f7" id=e3352a4e-64ef-4e9a-9c72-1baed7bc708b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:51 ha-608611 crio[779]: time="2025-10-09T18:52:51.46556055Z" level=info msg="createCtr: deleting container cf586613d7d6c7101a35d57ff4399b19125d2d25c376e75e0e2bc342279f87f7 from storage" id=e3352a4e-64ef-4e9a-9c72-1baed7bc708b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:51 ha-608611 crio[779]: time="2025-10-09T18:52:51.467635259Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=e3352a4e-64ef-4e9a-9c72-1baed7bc708b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.441611161Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=65066702-a229-4609-b3be-b95ea86bd092 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.442560102Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5a2a6803-0dbf-410a-8712-f24ffc395435 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.443443252Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=b2b3cb72-5704-446b-b641-c8aff7558569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.443644361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.447199521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.447774782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.467448508Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b2b3cb72-5704-446b-b641-c8aff7558569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.468757566Z" level=info msg="createCtr: deleting container ID 4a3568e4088019489c3e4d49bd04682445929bc8d031dae3407ddeb19b2d2883 from idIndex" id=b2b3cb72-5704-446b-b641-c8aff7558569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.468793143Z" level=info msg="createCtr: removing container 4a3568e4088019489c3e4d49bd04682445929bc8d031dae3407ddeb19b2d2883" id=b2b3cb72-5704-446b-b641-c8aff7558569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.468825677Z" level=info msg="createCtr: deleting container 4a3568e4088019489c3e4d49bd04682445929bc8d031dae3407ddeb19b2d2883 from storage" id=b2b3cb72-5704-446b-b641-c8aff7558569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:55 ha-608611 crio[779]: time="2025-10-09T18:52:55.470877363Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=b2b3cb72-5704-446b-b641-c8aff7558569 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.441462874Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=05044895-524e-4757-8917-94494d2eddfd name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.442518312Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=c7f0948b-05cd-4ba1-b55f-0e48a01046af name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.443486229Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=574d8ca2-8661-4c09-89b4-1029ba6f121b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.443738598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.447241042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.447693773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.464532754Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=574d8ca2-8661-4c09-89b4-1029ba6f121b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.46592574Z" level=info msg="createCtr: deleting container ID 741e4dcf56beeb102a7c4a190d63171d49c54008118793c8c0f8479dbfffc181 from idIndex" id=574d8ca2-8661-4c09-89b4-1029ba6f121b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.465961593Z" level=info msg="createCtr: removing container 741e4dcf56beeb102a7c4a190d63171d49c54008118793c8c0f8479dbfffc181" id=574d8ca2-8661-4c09-89b4-1029ba6f121b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.46598991Z" level=info msg="createCtr: deleting container 741e4dcf56beeb102a7c4a190d63171d49c54008118793c8c0f8479dbfffc181 from storage" id=574d8ca2-8661-4c09-89b4-1029ba6f121b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:52:57 ha-608611 crio[779]: time="2025-10-09T18:52:57.467935014Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=574d8ca2-8661-4c09-89b4-1029ba6f121b name=/runtime.v1.RuntimeService/CreateContainer
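Every CreateContainer request above ends in the same runtime error, "cannot open sd-bus: No such file or directory", after which crio rolls the attempt back (deleting the container ID from the idIndex, removing it from storage, releasing the name). In CRI-O that sd-bus message typically points at the systemd cgroup manager trying to reach a D-Bus socket that is not available inside the docker-driver node. A plausible check (an assumption about this environment, not something verified in this run):

    # inspect which cgroup manager crio is configured with
    grep -R "cgroup_manager" /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
    # if it reports "systemd", switching to cgroupfs (with conmon_cgroup = "pod")
    # under [crio.runtime] and restarting crio is a common workaround:
    sudo systemctl restart crio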
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:53:00.183205    2680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:53:00.183698    2680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:53:00.185419    2680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:53:00.186009    2680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:53:00.187628    2680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
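The describe-nodes failure is a downstream symptom: /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and nothing is listening there because the apiserver container was never created. That can be confirmed on the node with a listener check (standard tooling, not part of the captured run):

    sudo ss -ltnp | grep 8443   # no output expected while the apiserver is down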
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:53:00 up  1:35,  0 user,  load average: 0.03, 0.08, 0.08
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:52:51 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:52:51 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:52:51 ha-608611 kubelet[1930]: E1009 18:52:51.468118    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:52:54 ha-608611 kubelet[1930]: E1009 18:52:54.050751    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd538600d  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-608611 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431807501 +0000 UTC m=+0.618185774,LastTimestamp:2025-10-09 18:48:58.431807501 +0000 UTC m=+0.618185774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: E1009 18:52:55.064654    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: I1009 18:52:55.217626    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: E1009 18:52:55.217993    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: E1009 18:52:55.441157    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: E1009 18:52:55.471189    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:52:55 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:52:55 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: E1009 18:52:55.471314    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:52:55 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:52:55 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:52:55 ha-608611 kubelet[1930]: E1009 18:52:55.471350    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:52:57 ha-608611 kubelet[1930]: E1009 18:52:57.285664    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:52:57 ha-608611 kubelet[1930]: E1009 18:52:57.440986    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:52:57 ha-608611 kubelet[1930]: E1009 18:52:57.468249    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:52:57 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:52:57 ha-608611 kubelet[1930]:  > podSandboxID="770c3dd955a8e4513f9e5b862a3cb7f1d4ff6ebd095626539e3d2eb18ba246dc"
	Oct 09 18:52:57 ha-608611 kubelet[1930]: E1009 18:52:57.468347    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:52:57 ha-608611 kubelet[1930]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:52:57 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:52:57 ha-608611 kubelet[1930]: E1009 18:52:57.468376    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 18:52:58 ha-608611 kubelet[1930]: E1009 18:52:58.451482    1930 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (298.052658ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:53:00.567619   73461 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (502.12s)

TestMultiControlPlane/serial/DeployApp (102.01s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (90.940806ms)

** stderr ** 
	error: cluster "ha-608611" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- rollout status deployment/busybox: exit status 1 (87.988842ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (90.889134ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:00.852129   14880 retry.go:31] will retry after 596.512845ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.282181ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:01.541291   14880 retry.go:31] will retry after 2.054585594s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (87.1335ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:03.684765   14880 retry.go:31] will retry after 1.283230442s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (90.306716ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:05.058639   14880 retry.go:31] will retry after 4.598378194s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (89.655088ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:09.751193   14880 retry.go:31] will retry after 2.818051208s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.070667ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:12.660436   14880 retry.go:31] will retry after 10.561217677s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (90.795465ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:23.321128   14880 retry.go:31] will retry after 6.799665693s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (89.282833ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:30.214668   14880 retry.go:31] will retry after 8.969184198s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (88.990016ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:39.276841   14880 retry.go:31] will retry after 16.445474151s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.146648ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1009 18:53:55.813897   14880 retry.go:31] will retry after 45.065326916s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.726686ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (90.987373ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (89.553639ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (88.969236ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (87.935578ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (294.775401ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:41.636604   74465 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-753440 image ls --format json --alsologtostderr                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls --format table --alsologtostderr                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
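If the preload extraction needs verifying by hand, listing the volume from a throwaway container is enough; a sketch, assuming the tarball unpacks under /var/lib/containers in the ha-608611 volume (the mount path is inferred from the tar invocation above, not confirmed here):

	docker run --rm -v ha-608611:/var busybox ls /var/lib/containers/storage
	# should show the cri-o image store seeded from preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4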
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
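The host port that SSH is reachable on is whatever Docker assigned to 22/tcp; the inspect template minikube uses a few lines below resolves it directly:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611
	# this run resolved to 32783 (see the SSH client lines that follow)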
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
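To confirm the insecure-registry drop-in written at 18:44:45.508 survived the crio restart, reading the file back is a quick check; a sketch, assuming the kicbase crio unit sources /etc/sysconfig/crio.minikube:

	docker exec ha-608611 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '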
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
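Piecing together the sed edits above, the keys they touch in /etc/crio/crio.conf.d/02-crio.conf can be read back in one pass; a sketch:

	docker exec ha-608611 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",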
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
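Before handing this file to kubeadm, it can be exercised without committing the node; a sketch using the bundled binary path from the kubelet unit above (--dry-run is a standard kubeadm init flag, though it still runs preflight checks):

	docker exec ha-608611 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run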
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
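Once a control plane is actually up (this run never gets that far), the kube-vip manifest above can be sanity-checked from the names it declares: the leader lease comes from vip_leasename and the VIP from the address env var; a sketch:

	kubectl -n kube-system get lease plndr-cp-lock   # leader-election lease named above
	ping -c 1 192.168.49.254                         # the advertised control-plane VIP

Note the earlier kube-vip.go:163 line: the ip_vs modules were missing, so control-plane load-balancing was skipped and only the ARP-based VIP is configured.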
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
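The SAN list requested at 18:44:47.296 can be confirmed on the assembled cert; a sketch, run on the host:

	openssl x509 -noout -text -in /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt | grep -A1 'Subject Alternative Name'
	# expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.254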
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
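The b5213941.0, 3ec20f2e.0, and 51391683.0 names above are not arbitrary: OpenSSL links each CA under its subject hash so the verifier can locate it. The hash command minikube already runs reproduces the mapping:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created above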
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
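When wait-control-plane fails like this, the three refused health endpoints point straight at the static pods; a triage sketch against the node (assuming curl is present in the kicbase image):

	docker exec ha-608611 sudo crictl ps -a                                  # were apiserver/controller-manager/scheduler/etcd containers even created?
	docker exec ha-608611 sudo journalctl -u kubelet --no-pager | tail -n 50 # kubelet's view of the static pods
	docker exec ha-608611 curl -ks https://127.0.0.1:10257/healthz           # kube-controller-manager
	docker exec ha-608611 curl -ks https://127.0.0.1:10259/livez             # kube-scheduler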
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
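
The three probes kubeadm gives up on above can be reproduced by hand from inside the node (a minimal sketch, assuming curl is present and the node is reachable via 'minikube ssh'; -k is needed because the components serve self-signed certificates):

	# Same endpoints the [control-plane-check] phase polls above
	curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
	# "connection refused", as in the error above, means nothing ever bound the
	# port, i.e. the container was never created, not that a component started slowly.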
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
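
The four grep-and-remove steps above are minikube's stale-kubeconfig check: each file must reference the expected control-plane endpoint or it is deleted so the retried 'kubeadm init' can regenerate it. Condensed into one loop (a sketch over the same paths shown in the log):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep -q exits non-zero when the endpoint is absent (or, as here, when
	  # 'kubeadm reset' already removed the file), which triggers the rm
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done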
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
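
The seven queries above ask CRI-O for each expected component by container name; '--name' filters, '--quiet' prints only IDs, and '-a' includes exited containers, so empty output means the container was never created at all. The same sweep as one loop (a sketch using the exact command from the log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	  # count matching containers in any state
	  echo "$c: $(sudo crictl ps -a --quiet --name="$c" | wc -l)"
	done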
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
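
With no containers to inspect, minikube falls back to host-level sources; the same commands can be run by hand when triaging (a sketch, assuming shell access to the node):

	sudo journalctl -u kubelet -n 400      # kubelet service log, last 400 lines
	sudo journalctl -u crio -n 400         # CRI-O service log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and worse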
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.463874433Z" level=info msg="createCtr: removing container 1f773326f9a9078eb5d1abe1ab99b36cbcae3da5113f26b735aa5ab717d5c059" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.46390603Z" level=info msg="createCtr: deleting container 1f773326f9a9078eb5d1abe1ab99b36cbcae3da5113f26b735aa5ab717d5c059 from storage" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.465938625Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.441562226Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5a6f584d-307d-492b-a663-2ac01c27f2ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.442494382Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=76f712ad-5145-4cc4-a27e-b1fa376b76ca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.443368198Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.443580222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.446973039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.447404129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.46683003Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468112955Z" level=info msg="createCtr: deleting container ID 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from idIndex" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468168675Z" level=info msg="createCtr: removing container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468207713Z" level=info msg="createCtr: deleting container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from storage" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.470223387Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.441918254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5f830916-7502-45c7-a992-b1afe6a4ec2f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.442961662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ce442719-daad-4875-88bf-1eae8be1d0eb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.443900487Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.444174088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.448745276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.449318807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.46398444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465375584Z" level=info msg="createCtr: deleting container ID 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from idIndex" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
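
Every CreateContainer attempt above fails the same way: 'cannot open sd-bus: No such file or directory'. That error typically means the runtime is configured for the systemd cgroup driver and so must create cgroup scopes over systemd's D-Bus socket, which is missing or unreachable inside the node; creation then aborts before the process ever starts, which is exactly why every health check earlier saw 'connection refused'. Two read-only checks (a sketch; the paths are the conventional ones, not confirmed by this log):

	sudo grep -R "cgroup_manager" /etc/crio/ 2>/dev/null   # expected to show: cgroup_manager = "systemd"
	ls -l /var/run/dbus/system_bus_socket                  # an absent socket explains the sd-bus failure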
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:42.211289    3035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:42.211803    3035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:42.213341    3035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:42.213810    3035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:42.215388    3035 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
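
The refusal on localhost:8443 is consistent with the empty container table above: nothing is listening because the kube-apiserver container was never created. A direct probe distinguishes "not listening" from TLS or RBAC problems (a sketch, assuming curl inside the node):

	curl -sk https://localhost:8443/livez || echo "apiserver is not listening on 8443"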
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:42 up  1:37,  0 user,  load average: 0.00, 0.05, 0.07
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:38 ha-608611 kubelet[1930]:  > podSandboxID="770c3dd955a8e4513f9e5b862a3cb7f1d4ff6ebd095626539e3d2eb18ba246dc"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.466354    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:38 ha-608611 kubelet[1930]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:38 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.466380    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.621301    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.079705    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: I1009 18:54:40.247858    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.248295    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.441066    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470513    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470610    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470638    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.441458    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468106    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468242    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (288.827215ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:42.576683   74805 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (102.01s)
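
Note on the failure mode: every CreateContainer call above dies with "cannot open sd-bus: No such file or directory", which usually means crio/runc cannot open a systemd D-Bus connection inside the kic node, so etcd, kube-scheduler and kube-controller-manager never start and the apiserver stays unreachable. A plausible manual triage, assuming shell access to the node; only the profile name is taken from this run, the checks themselves are generic:

	minikube -p ha-608611 ssh
	# inside the node: the systemd cgroup driver needs PID 1 to be systemd
	ps -p 1 -o comm=                      # expected: systemd
	ls -l /run/dbus/system_bus_socket     # one socket sd-bus connects through
	sudo journalctl -u crio --no-pager | tail -n 20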

x
+
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (89.849738ms)

** stderr ** 
	error: no server found for cluster "ha-608611"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
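
kubectl's "no server found for cluster" means the kubeconfig has no usable cluster entry for ha-608611, matching the earlier status stderr ("ha-608611" does not appear in .../kubeconfig). A minimal check-and-repair sketch, using the kubeconfig path from that stderr and the fix the status warning itself suggests:

	export KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	kubectl config get-contexts            # ha-608611 entry missing or stale
	minikube -p ha-608611 update-context   # rewrite the endpoint in the kubeconfig
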
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
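
The NetworkSettings.Ports map above is where the published host ports live; the apiserver port can be read back with the same Go-template pattern the harness itself uses for the SSH port later in this log:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-608611
	# -> 32786 for this run, per the inspect output above
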
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (281.802831ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:42.967352   74951 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
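
The harness probes one component per status call via Go templates; the same two probes reproduce this run's split state by hand (profile name from this run):

	minikube status -p ha-608611 --format='{{.Host}}'        # Running (node container is up)
	minikube status -p ha-608611 --format='{{.APIServer}}'   # Stopped (control plane never started)
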
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-753440 image ls --format table --alsologtostderr                                                     │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
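The server cert generated during configureAuth (org=jenkins.ha-608611, SANs 127.0.0.1, 192.168.49.2, ha-608611, localhost, minikube) is an ordinary CA-signed x509 certificate. A crypto/x509 sketch of that step, assuming a PKCS#1 RSA CA key (an assumption; not the actual provision.go code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")        // CaCertPath from the log
	caKeyPEM, _ := os.ReadFile("ca-key.pem") // CaPrivateKeyPath from the log
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: PKCS#1 RSA key; use ParsePKCS8PrivateKey otherwise
	if err != nil {
		panic(err)
	}
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-608611"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-608611", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}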
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
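The `find ... -exec mv` above parks any bridge/podman CNI configs so that kindnet (recommended later for this multinode profile) owns pod networking. An equivalent non-recursive sweep in Go (a sketch, not minikube's cni.go):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern) // Glob is non-recursive, like -maxdepth 1
		if err != nil {
			panic(err)
		}
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already parked on a previous run
			}
			if err := os.Rename(f, f+".mk_disabled"); err == nil {
				fmt.Printf("disabled %s\n", f)
			}
		}
	}
}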
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
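The sed one-liners above patch /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the systemd cgroup manager, and open unprivileged ports via default_sysctls. A Go sketch of the first two rewrites (same regex-on-whole-file approach; run as root):

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` must follow, as in the log.
}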
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
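The preload check above shells out to `sudo crictl images --output json` and compares the result against the expected image list. A decoding sketch (field names follow the CRI ListImages response; treat the exact schema as an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listResponse models only the parts of `crictl images --output json` we need.
type listResponse struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var resp listResponse
	if err := json.Unmarshal(out, &resp); err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.RepoTags)
	}
}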
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
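Before handing the rendered config above to `kubeadm init`, it can be sanity-checked with `kubeadm config validate` (available in current kubeadm releases; the config path below is where minikube copies the file later in this log). A thin diagnostic wrapper, offered as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubeadm",
		"config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("validation failed:", err)
	}
}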
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
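kube-vip runs in ARP mode here (vip_arp=true) because the earlier `lsmod | grep ip_vs` probe found no IPVS modules. lsmod only formats /proc/modules, so the same check can be done in-process (a sketch, not minikube's kube-vip.go):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules") // the data source behind lsmod
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			fmt.Println("IPVS available:", strings.Fields(sc.Text())[0])
			return
		}
	}
	fmt.Println("no ip_vs modules loaded; fall back to ARP-mode VIP")
}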
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
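The `openssl x509 -hash -noout` runs above print each certificate's subject hash, which is how OpenSSL names CA symlinks in /etc/ssl/certs (<hash>.0). A sketch reproducing the hash-and-link step for one cert (assumes the openssl CLI is present, as the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA in this run
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println(link, "->", cert)
}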
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.463874433Z" level=info msg="createCtr: removing container 1f773326f9a9078eb5d1abe1ab99b36cbcae3da5113f26b735aa5ab717d5c059" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.46390603Z" level=info msg="createCtr: deleting container 1f773326f9a9078eb5d1abe1ab99b36cbcae3da5113f26b735aa5ab717d5c059 from storage" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.465938625Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.441562226Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5a6f584d-307d-492b-a663-2ac01c27f2ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.442494382Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=76f712ad-5145-4cc4-a27e-b1fa376b76ca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.443368198Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.443580222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.446973039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.447404129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.46683003Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468112955Z" level=info msg="createCtr: deleting container ID 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from idIndex" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468168675Z" level=info msg="createCtr: removing container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468207713Z" level=info msg="createCtr: deleting container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from storage" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.470223387Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.441918254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5f830916-7502-45c7-a992-b1afe6a4ec2f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.442961662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ce442719-daad-4875-88bf-1eae8be1d0eb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.443900487Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.444174088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.448745276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.449318807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.46398444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465375584Z" level=info msg="createCtr: deleting container ID 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from idIndex" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:43.536193    3194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:43.536719    3194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:43.538326    3194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:43.538734    3194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:43.540270    3194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:43 up  1:37,  0 user,  load average: 0.00, 0.05, 0.07
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:38 ha-608611 kubelet[1930]:  > podSandboxID="770c3dd955a8e4513f9e5b862a3cb7f1d4ff6ebd095626539e3d2eb18ba246dc"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.466354    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:38 ha-608611 kubelet[1930]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:38 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.466380    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.621301    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.079705    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: I1009 18:54:40.247858    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.248295    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.441066    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470513    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470610    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470638    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.441458    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468106    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468242    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	

-- /stdout --
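Every control-plane container hits the same runtime failure: CRI-O logs "Container creation error: cannot open sd-bus: No such file or directory" while creating kube-scheduler, kube-controller-manager and etcd, the kubelet then reports CreateContainerError for each static pod, and the kubeadm health checks on 8443/10257/10259 can never pass. A quick way to confirm this from the host is to reuse the commands the log above already runs (a sketch for this profile; `minikube ssh` with a command string is standard usage, and the grep patterns here are illustrative, not part of the test harness):

	minikube ssh -p ha-608611 "sudo crictl ps -a | grep kube | grep -v pause"
	minikube ssh -p ha-608611 "sudo journalctl -u crio -n 400 | grep 'cannot open sd-bus'"

The first command should print nothing, matching the empty container-status table above; the second should show the recurring createCtr failures. One common reading of "cannot open sd-bus" is a runtime configured for the systemd cgroup manager without a reachable systemd D-Bus socket, but that interpretation is inferred from the message text, not stated anywhere in this report.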
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (291.288137ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:43.899544   75300 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
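The non-zero status here comes paired with a kubeconfig endpoint error (status.go:458): the ha-608611 profile does not appear in the kubeconfig the tests use, so status degrades to "Stopped" plus the stale-context warning. A short sketch of the manual equivalent, assuming the same kubeconfig path quoted in the stderr above:

	# Confirm the profile really is absent from the kubeconfig the tests use.
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21139-11374/kubeconfig
	# Re-point kubectl at the profile, as the warning suggests.
	out/minikube-linux-amd64 -p ha-608611 update-context
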

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (1.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 node add --alsologtostderr -v 5: exit status 103 (247.43628ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-608611 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-608611"

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:54:43.956758   75413 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:43.957015   75413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:43.957024   75413 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:43.957028   75413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:43.957215   75413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:43.957493   75413 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:43.957808   75413 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:43.958197   75413 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:43.976248   75413 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:43.976542   75413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:54:44.030396   75413 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:54:44.020507346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:54:44.030536   75413 api_server.go:166] Checking apiserver status ...
	I1009 18:54:44.030589   75413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:54:44.030634   75413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:44.048398   75413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	W1009 18:54:44.152994   75413 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:54:44.154984   75413 out.go:179] * The control-plane node ha-608611 apiserver is not running: (state=Stopped)
	I1009 18:54:44.156323   75413 out.go:179]   To start a cluster, run: "minikube start -p ha-608611"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-608611 node add --alsologtostderr -v 5" : exit status 103
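The exit-103 path is visible in the stderr trace: node add first probes the apiserver by running pgrep inside the node over the published SSH port (32783), and the empty pgrep output is what becomes state=Stopped. The same probe can be replayed by hand, using the key path, port, and username shown in the sshutil line above:

	# Same liveness probe minikube ran (api_server.go:166 above), done manually.
	ssh -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa \
	    -p 32783 docker@127.0.0.1 'sudo pgrep -xnf kube-apiserver.*minikube.*'
	# Empty output / exit 1 here matches the "stopped: unable to get apiserver pid" error.
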
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
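The inspect output above is where the port numbers in these traces come from: each guest port is published on 127.0.0.1 with an ephemeral host port ("HostPort": "" in HostConfig.PortBindings, resolved under NetworkSettings.Ports once the container runs). The template minikube uses for SSH, reusable for any of the five published ports:

	# Resolve the host port mapped to the node's SSH port (prints 32783 here);
	# swap "22/tcp" for "8443/tcp" etc. to resolve the other published ports.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611
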
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (283.72813ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:54:44.449040   75518 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
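Note on the step above: the tee writes a one-line environment file that the CRI-O unit picks up. Its contents can be confirmed on the node with a quick cat (a sketch; the output shown is exactly the string the command wrote):

    $ cat /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '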
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
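For context: assets placed under .minikube/files are mirrored into the node at the same path, which is why the 148802.pem scanned above lands in /etc/ssl/certs. Once a node is reachable this can be spot-checked over SSH (a sketch; the -p value is the profile name from this log):

    $ minikube -p ha-608611 ssh "ls -la /etc/ssl/certs/148802.pem"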
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
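The find/-exec step above side-lines conflicting bridge and podman CNI configs by appending a .mk_disabled suffix rather than deleting them; undoing it is just stripping the suffix again (sketch):

    $ for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done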
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
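Taken together, the sed edits above should leave the CRI-O drop-in looking roughly like this (a reconstruction from the commands, not a capture from the node; only the touched keys are shown):

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",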
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
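The Go template in that docker network inspect packs the network's name, driver, subnet, gateway, MTU and container IPs into one JSON-ish blob; a trimmed query for just the subnet and gateway would look like this (sketch):

    $ docker network inspect ha-608611 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'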
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
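The ExecStart flags above are what get written to the kubelet drop-in a few steps below (the 359-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). On the node, the effective unit can be inspected with (sketch):

    $ systemctl cat kubelet
    $ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf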
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
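Because the lsmod probe above came back empty, kube-vip is rendered without control-plane load-balancing (no lb_enable env var). On a host where IPVS is available, the modules can be loaded before start (sketch; the module list is the usual IPVS set and is an assumption for this kernel):

    $ lsmod | grep ip_vs
    $ sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh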
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
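With the rendered config now on disk, kubeadm itself can sanity-check it before init (sketch; assumes the kubeadm config validate subcommand, present in recent kubeadm releases, and the binaries path from this log):

    $ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new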
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
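After the two hosts edits (this one and the host.minikube.internal one at 18:44:46.890354), /etc/hosts on the node should contain exactly these entries:

    $ grep minikube.internal /etc/hosts
    192.168.49.1	host.minikube.internal
    192.168.49.254	control-plane.minikube.internal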
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
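The apiserver cert pushed above was generated at 18:44:47.296712 with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]; once on the node, those can be confirmed with openssl (sketch):

    $ openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'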
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
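The three ln -fs steps above follow OpenSSL's subject-hash convention (the scheme c_rehash uses): the link name is the certificate's subject hash plus a .0 suffix, which is where the 3ec20f2e, b5213941 and 51391683 values come from. A generic sketch of the pattern:

    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"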
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
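Per the hint embedded in the output, a first triage pass on the node would look roughly like this (the crictl lines are verbatim from the message above; the journalctl ones are an assumption for this systemd-based image):

    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute a real container ID
    $ sudo journalctl -u kubelet --no-pager | tail -n 50
    $ sudo journalctl -u crio --no-pager | tail -n 50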
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
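	The SystemVerification warning above is benign in this environment: kubeadm tries to read the kernel configuration through the `configs` module, which this GCP kernel does not ship, and minikube explicitly skips the check (note `SystemVerification` in the `--ignore-preflight-errors` list in the kubeadm init command above). A hedged manual check, assuming a typical distro layout, would be:
	
		# the kernel config is normally exposed as a plain file even without the module
		ls /boot/config-$(uname -r) /proc/config.gz 2>/dev/null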
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.463874433Z" level=info msg="createCtr: removing container 1f773326f9a9078eb5d1abe1ab99b36cbcae3da5113f26b735aa5ab717d5c059" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.46390603Z" level=info msg="createCtr: deleting container 1f773326f9a9078eb5d1abe1ab99b36cbcae3da5113f26b735aa5ab717d5c059 from storage" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:38 ha-608611 crio[779]: time="2025-10-09T18:54:38.465938625Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=bea9fa44-33c4-4c47-b7c8-0cfa7c746858 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.441562226Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=5a6f584d-307d-492b-a663-2ac01c27f2ee name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.442494382Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=76f712ad-5145-4cc4-a27e-b1fa376b76ca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.443368198Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.443580222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.446973039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.447404129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.46683003Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468112955Z" level=info msg="createCtr: deleting container ID 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from idIndex" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468168675Z" level=info msg="createCtr: removing container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468207713Z" level=info msg="createCtr: deleting container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from storage" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.470223387Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.441918254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5f830916-7502-45c7-a992-b1afe6a4ec2f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.442961662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ce442719-daad-4875-88bf-1eae8be1d0eb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.443900487Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.444174088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.448745276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.449318807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.46398444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465375584Z" level=info msg="createCtr: deleting container ID 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from idIndex" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:45.021275    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:45.021852    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:45.023551    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:45.024014    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:45.025709    3360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:45 up  1:37,  0 user,  load average: 0.08, 0.07, 0.08
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:38 ha-608611 kubelet[1930]:  > podSandboxID="770c3dd955a8e4513f9e5b862a3cb7f1d4ff6ebd095626539e3d2eb18ba246dc"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.466354    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:38 ha-608611 kubelet[1930]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:38 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.466380    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 18:54:38 ha-608611 kubelet[1930]: E1009 18:54:38.621301    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.079705    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: I1009 18:54:40.247858    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.248295    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.441066    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470513    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470610    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470638    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.441458    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468106    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468242    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	

                                                
                                                
-- /stdout --
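In the CRI-O and kubelet excerpts above, every static-pod container fails at create time with `cannot open sd-bus: No such file or directory`, which is why the control-plane health checks never succeed. This error typically means the OCI runtime (or conmon) is configured for the systemd cgroup manager but cannot reach a systemd bus inside the node. A hedged way to check, assuming default CRI-O config locations, would be:

	# see which cgroup manager CRI-O is configured to use
	sudo grep -R cgroup_manager /etc/crio/
	# check whether a systemd bus socket is reachable inside the node
	ls -l /run/dbus/system_bus_socket /run/systemd/private 2>/dev/null

Whether the fix is switching `cgroup_manager` to "cgroupfs" or restoring the bus socket depends on how the node image is meant to be configured; the log alone does not settle that.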
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (297.21512ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:54:45.392700   75845 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.49s)
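The status output above prints its own remedy; a minimal sketch of applying it, using the profile name from the log, would be:

	# rewrite the kubeconfig entry for this profile to match the running node
	minikube update-context -p ha-608611
	# confirm the context now resolves
	kubectl config get-contexts ha-608611

Note that this only repairs the kubeconfig pointer; with the apiserver reported as Stopped, the underlying cluster failure would remain.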

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (1.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-608611 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-608611 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (47.291773ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-608611

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-608611 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-608611 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
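The `context was not found` error is consistent with the earlier status warning that "ha-608611" is missing from the kubeconfig at /home/jenkins/minikube-integration/21139-11374/kubeconfig. An illustrative way to confirm this outside the test harness:

	# list the contexts actually present in the kubeconfig the tests use
	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/21139-11374/kubeconfig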
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
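Aside: the empty HostPort values under HostConfig.PortBindings above are expected; the container was published with 127.0.0.1::<port>-style flags, so Docker picks the host ports, and the assigned values only appear under NetworkSettings.Ports. A sketch for pulling one out, mirroring the Go template minikube itself uses later in this log:

    docker container inspect ha-608611 \
      -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # -> 32786, per the NetworkSettings block above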
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (284.189758ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:45.743770   75983 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
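The stdout above already names the fix; a short recovery sketch, assuming the profile itself is otherwise healthy:

    out/minikube-linux-amd64 -p ha-608611 update-context   # re-point kubectl at the profile's endpoint
    kubectl config get-contexts                            # verify ha-608611 now appears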
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
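	A sanity-check sketch: the subnet and gateway chosen above can be read back from the freshly created network (values from this log):
	
	    docker network inspect ha-608611 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # -> 192.168.49.0/24 192.168.49.1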
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
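	At this point the node is reachable over SSH with the key created above; a manual equivalent of what provisionDockerMachine does next (port 32783 is the mapped 22/tcp, and "docker" is the user per the sshutil lines below):
	
	    ssh -i /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa \
	        -o StrictHostKeyChecking=no -p 32783 docker@127.0.0.1 hostname
	    # -> ha-608611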
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
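	The server certificate generated above carries the SANs from the san=[...] argument; one way to confirm them on the host (a sketch; the -ext flag needs OpenSSL 1.1.1+ and the output format varies by version):
	
	    openssl x509 -noout -ext subjectAltName \
	      -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem
	    # -> DNS:ha-608611, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.2
	    #    (order may differ)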
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
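	The "systemd" detection above mirrors what the host Docker daemon reports; a quick cross-check:
	
	    docker info -f '{{.CgroupDriver}}'
	    # -> systemd (also visible as CgroupDriver:systemd in the docker info dump earlier)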
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
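Taken together, the sed edits above should leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf roughly as follows. This is a sketch reconstructed from the commands, not a capture from the test host:

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",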
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
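	If you want to sanity-check a generated config like the one above outside of a minikube run, recent kubeadm releases (v1.26+) can validate it directly. An illustrative invocation, not part of the minikube flow:

	$ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml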
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
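The manifest above configures kube-vip for leader election over a Lease named plndr-cp-lock, announcing the VIP 192.168.49.254 via ARP on eth0. Once a control plane is actually up, the current VIP holder could be inspected with something like the following (a hypothetical follow-up, not run in this test):

	$ kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'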
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
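The apiserver certificate generated above is signed for the service IP (10.96.0.1), localhost, the node IP, and the HA VIP (192.168.49.254). A quick way to confirm the SANs on such a cert, assuming OpenSSL 1.1.1+:

	$ openssl x509 -in /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -noout -ext subjectAltName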
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
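The openssl x509 -hash calls in this sequence compute the subject hashes that OpenSSL expects as link names under /etc/ssl/certs; e.g. minikubeCA.pem hashed to b5213941 above, which is why its symlink is /etc/ssl/certs/b5213941.0. Reproduced as a standalone sketch:

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941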
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
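	All three health endpoints in the failure above either refused the connection or timed out, which points at the static pods never coming up rather than a single probe flake. On the node, the same checks kubeadm performs can be reproduced by hand (sketch; -k skips TLS verification):

	$ curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	$ curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	$ curl -k https://127.0.0.1:10259/livez        # kube-scheduler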
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
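[editor's note] The crictl triage kubeadm suggests above can be run as one short loop; a minimal sketch, assuming the default CRI-O socket path shown in the log and that crictl is installed on the node:

    CRI=unix:///var/run/crio/crio.sock
    # list every kube/etcd container, running or exited, and dump the tail of its logs
    for id in $(sudo crictl --runtime-endpoint "$CRI" ps -a --name 'kube|etcd' -q); do
        echo "=== container $id ==="
        sudo crictl --runtime-endpoint "$CRI" logs --tail 20 "$id"
    done

In this particular run the loop would print nothing: as the "0 containers" listings below show, no control-plane container was ever created successfully.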
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468168675Z" level=info msg="createCtr: removing container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468207713Z" level=info msg="createCtr: deleting container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from storage" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.470223387Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.441918254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5f830916-7502-45c7-a992-b1afe6a4ec2f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.442961662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ce442719-daad-4875-88bf-1eae8be1d0eb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.443900487Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.444174088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.448745276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.449318807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.46398444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465375584Z" level=info msg="createCtr: deleting container ID 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from idIndex" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.441485805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=30142e19-bbd7-4eb1-b9bc-3f7fd8b15d13 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.442431482Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bc06eb87-f8e1-4752-90ce-f306d71bb12c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443389229Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443682696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.446968447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.447385153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.460272538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461764017Z" level=info msg="createCtr: deleting container ID c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from idIndex" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461810281Z" level=info msg="createCtr: removing container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461842736Z" level=info msg="createCtr: deleting container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from storage" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.464060722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:46.325147    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:46.325711    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:46.327313    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:46.327773    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:46.329342    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:46 up  1:37,  0 user,  load average: 0.08, 0.07, 0.08
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470513    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470610    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470638    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.441458    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468106    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468242    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.440984    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464410    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464511    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464543    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.045748    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.152695    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd5388d27  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,LastTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	

                                                
                                                
-- /stdout --
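[editor's note] The recurring failure in the CRI-O and kubelet excerpts above is "cannot open sd-bus: No such file or directory": container creation is trying to reach systemd over D-Bus, which typically points at a runtime configured for the systemd cgroup manager on a node where no systemd bus socket is reachable. A minimal sketch of how one might confirm this inside the node; the paths and the systemd-cgroup hypothesis are assumptions, not something this log states directly:

    # is CRI-O (or its runtime/conmon configuration) using the systemd cgroup manager?
    grep -Rni cgroup_manager /etc/crio/
    # are the systemd and D-Bus sockets present where a runtime would look for them?
    ls -l /run/systemd/private /run/dbus/system_bus_socket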
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (288.239272ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:54:46.689578   76311 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.30s)
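[editor's note] The stale-context warning in the stdout above and the kubeconfig endpoint error in the stderr are the same symptom seen from two sides. The fix minikube itself suggests can be scoped to this profile; a minimal sketch, assuming the ha-608611 profile still exists:

    out/minikube-linux-amd64 update-context -p ha-608611   # rewrite the kubeconfig entry for this profile
    kubectl config current-context                         # verify kubectl now points at ha-608611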

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-608611" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-608611" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
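
For reference, individual fields from an inspect dump like the one above can be read directly with Go templates rather than scanning the full JSON; a minimal sketch using the profile name from this run:

	# host port mapped to the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611
	# static IP assigned on the ha-608611 bridge network
	docker container inspect -f '{{(index .NetworkSettings.Networks "ha-608611").IPAddress}}' ha-608611
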
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (288.471008ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:47.307031   76560 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
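
The exit status 6 here is a kubeconfig problem rather than a host problem: the host reports Running, but the "ha-608611" context is missing from the kubeconfig the binary was pointed at. A minimal sketch of the repair the warning itself suggests, assuming the profile still exists:

	# rewrite the kubeconfig entry for this profile
	out/minikube-linux-amd64 update-context -p ha-608611
	# re-run the failing status check
	out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611
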
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
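
Note the empty pod argument in the "kubectl -- exec  -- nslookup ..." rows above: the preceding jsonpath queries returned nothing because the busybox rollout never completed, so the exec was issued without a target. A sketch of the intended call once a pod exists (the POD variable is a hypothetical placeholder, not taken from this run):

	# grab the first busybox pod name, then resolve a service from inside it
	POD=$(out/minikube-linux-amd64 -p ha-608611 kubectl -- get pods -o jsonpath='{.items[0].metadata.name}')
	out/minikube-linux-amd64 -p ha-608611 kubectl -- exec "$POD" -- nslookup kubernetes.default
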
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
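
To double-check the subnet and gateway that minikube just picked, the freshly created network can be inspected directly; a minimal sketch:

	docker network inspect ha-608611 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected here: 192.168.49.0/24 via 192.168.49.1
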
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
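
The step above is minikube's preload trick: a throwaway container whose entrypoint is tar unpacks the cached image tarball straight into the named volume that later becomes the node's /var. A generic sketch of the same pattern, with the volume and tarball names as placeholders and the image digest omitted:

	docker volume create mynode
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	  -v mynode:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
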
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
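
With the public key installed in /home/docker/.ssh/authorized_keys, the node is reachable over the published 22/tcp mapping; a minimal sketch using the host port from the inspect output above:

	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa \
	  -p 32783 docker@127.0.0.1 hostname
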
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
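
configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, and the names listed above; the local copy can be sanity-checked with openssl. A minimal sketch:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
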
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
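
Provisioning ends by writing a sysconfig fragment and restarting CRI-O (the tee command above); whether it landed can be checked from the host without SSH, since the node is just a container. A minimal sketch:

	docker exec ha-608611 cat /etc/sysconfig/crio.minikube
	docker exec ha-608611 systemctl is-active crio
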
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
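Note: the Go template above flattens `docker network inspect` into a single JSON object; for the ha-608611 network it should render something like the line below (a reconstruction assuming the default bridge driver and MTU, not output captured from this run):

	{"Name": "ha-608611","Driver": "bridge","Subnet": "192.168.49.0/24","Gateway": "192.168.49.1","MTU": 1500, "ContainerIPs": ["192.168.49.2/24",]}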
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
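Note: before the real init below, a config of this shape can be sanity-checked offline; a minimal sketch, assuming the same staged kubeadm binary path (kubeadm supports --dry-run):

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run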
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
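Note: with the ip_vs modules unavailable (see the lsmod check above), kube-vip falls back to plain ARP announcement of the VIP (vip_arp=true on vip_interface eth0). A quick check that the VIP landed on the node, assuming iproute2 inside the kic container (hypothetical invocation, not from this run):

	docker exec ha-608611 ip addr show eth0 | grep 192.168.49.254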
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
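Note: the hex names in these symlinks come from OpenSSL's subject-hash lookup scheme: `openssl x509 -hash` prints the hash (e.g. b5213941 for minikubeCA.pem above), and certificate verification expects a <hash>.0 link under /etc/ssl/certs. The equivalent manual steps would be roughly:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0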
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
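Note: all three control-plane health checks were refused or timed out, which usually means the static pods never came up under the kubelet. A plausible first triage, reusing kubeadm's own hint above plus the kubelet journal (commands assumed available on the node, not run here):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo journalctl -u kubelet --no-pager | tail -n 50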
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468168675Z" level=info msg="createCtr: removing container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468207713Z" level=info msg="createCtr: deleting container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from storage" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.470223387Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.441918254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5f830916-7502-45c7-a992-b1afe6a4ec2f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.442961662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ce442719-daad-4875-88bf-1eae8be1d0eb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.443900487Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.444174088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.448745276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.449318807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.46398444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465375584Z" level=info msg="createCtr: deleting container ID 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from idIndex" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.441485805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=30142e19-bbd7-4eb1-b9bc-3f7fd8b15d13 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.442431482Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bc06eb87-f8e1-4752-90ce-f306d71bb12c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443389229Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443682696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.446968447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.447385153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.460272538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461764017Z" level=info msg="createCtr: deleting container ID c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from idIndex" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461810281Z" level=info msg="createCtr: removing container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461842736Z" level=info msg="createCtr: deleting container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from storage" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.464060722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:47.872364    3689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:47.872856    3689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:47.874521    3689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:47.874978    3689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:47.876575    3689 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:47 up  1:37,  0 user,  load average: 0.08, 0.07, 0.08
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470610    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470638    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.441458    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468106    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468242    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.440984    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464410    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464511    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464543    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.045748    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.152695    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd5388d27  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,LastTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.081114    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: I1009 18:54:47.250003    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.250375    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	

                                                
                                                
-- /stdout --
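The recurring signature in the dump above is CRI-O failing every CreateContainer call with "cannot open sd-bus: No such file or directory", which is why etcd, kube-apiserver, kube-controller-manager, and kube-scheduler never start and the kubeadm control-plane checks time out after 4m0s. That error usually indicates the runtime is using the systemd cgroup driver but cannot reach systemd over D-Bus inside the node. A minimal sketch for confirming this by hand, assuming the ha-608611 profile from this run is still present (the /etc/crio paths and the cgroup_manager key are assumptions about the node image, not taken from this report):

	# Is systemd actually running as PID 1 inside the kicbase node?
	minikube ssh -p ha-608611 -- 'ps -p 1 -o comm='
	# The private D-Bus socket the systemd cgroup driver talks to:
	minikube ssh -p ha-608611 -- 'ls -l /run/systemd/private'
	# How CRI-O is configured to manage cgroups (systemd vs cgroupfs):
	minikube ssh -p ha-608611 -- 'sudo grep -r cgroup_manager /etc/crio'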
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (299.564016ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:54:48.250809   76884 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.56s)
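Beyond the stopped apiserver, the status output above also flags a stale kubeconfig ("ha-608611" does not appear in the harness kubeconfig), which is why the kubectl-based checks are skipped. Outside the harness this is the situation minikube's own warning points at; a minimal sketch of the repair, assuming the same profile name and an otherwise reachable cluster:

	# Rewrite the kubeconfig entry for this profile, then verify it took:
	minikube update-context -p ha-608611
	kubectl config current-context
	kubectl get nodes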

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (1.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --output json --alsologtostderr -v 5: exit status 6 (286.395039ms)

                                                
                                                
-- stdout --
	{"Name":"ha-608611","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:54:48.306367   76999 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:48.306619   76999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:48.306628   76999 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:48.306632   76999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:48.306835   76999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:48.307016   76999 out.go:368] Setting JSON to true
	I1009 18:54:48.307040   76999 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:48.307162   76999 notify.go:220] Checking for updates...
	I1009 18:54:48.307356   76999 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:48.307368   76999 status.go:174] checking status of ha-608611 ...
	I1009 18:54:48.307756   76999 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:48.327176   76999 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:54:48.327198   76999 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:48.327434   76999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:54:48.345164   76999 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:48.345415   76999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:54:48.345451   76999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:48.363566   76999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:54:48.463315   76999 ssh_runner.go:195] Run: systemctl --version
	I1009 18:54:48.469477   76999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:54:48.481237   76999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:54:48.536530   76999 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:54:48.527392937 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:54:48.536939   76999 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:54:48.536964   76999 api_server.go:166] Checking apiserver status ...
	I1009 18:54:48.537003   76999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:54:48.546970   76999 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:54:48.546993   76999 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:54:48.547023   76999 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-608611 status --output json --alsologtostderr -v 5" : exit status 6
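For scripting around this failure mode, the JSON printed by `minikube status --output json` (the stdout block above) is easy to gate on even though the command itself exits non-zero; a minimal sketch, assuming jq is available on the host (jq is not part of this harness):

	# Fail fast unless the apiserver is reported Running:
	state=$(minikube -p ha-608611 status --output json | jq -r '.APIServer')
	[ "$state" = "Running" ] || { echo "apiserver is $state" >&2; exit 1; }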
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
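The proxy snapshot above can be spot-checked directly on the host; a one-line sketch (the grep pattern is mine, not the harness's):

	env | grep -iE '^(http|https|no)_proxy=' || echo "no proxy variables set"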
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
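The empty "HostPort" strings under HostConfig.PortBindings in the inspect output above ask Docker to assign ephemeral host ports at container start; the concrete assignments land under NetworkSettings.Ports (32783-32787 here). Any single mapping can be read back with a Go template; the sketch below reuses the same template this log applies later for the SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611
	# prints 32783 for the container inspected above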
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (285.873632ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:48.841803   77139 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
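The exit status 6 here is a kubeconfig mismatch rather than a host failure: the container reports Running, but the "ha-608611" endpoint is missing from the kubeconfig the status check reads. A minimal repair would follow the warning printed in stdout (a sketch; the exact flags used by this job are not shown in the log):

	# see which contexts kubectl currently knows about
	kubectl config get-contexts
	# regenerate the context for this profile, as the warning suggests
	out/minikube-linux-amd64 update-context -p ha-608611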
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
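This "network ha-608611 not found" failure is expected on a fresh profile: minikube probes for an existing network before creating one, and the create follows a few lines below. A quick post-create check would look like this (an assumed follow-up command, not captured in the log):

	docker network inspect ha-608611 --format '{{(index .IPAM.Config 0).Subnet}}'
	# -> 192.168.49.0/24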
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
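The node-container launch above, re-wrapped for readability (the same flags as the logged command, only regrouped):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro --volume ha-608611:/var \
	  --hostname ha-608611 --name ha-608611 \
	  --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 \
	  --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 \
	  --network ha-608611 --ip 192.168.49.2 \
	  --memory=3072mb -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92

The double-colon publish form (127.0.0.1::8443) binds to localhost and lets Docker pick the host port, which is what produced the empty HostPort strings in PortBindings and the 3278x assignments seen in the inspect output earlier.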
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
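Taken together, the sed edits above amount to a cri-o drop-in roughly like the following (a reconstruction from the commands; the actual 02-crio.conf, including its TOML table headers, is not shown in the log):

	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	pause_image = "registry.k8s.io/pause:3.10.1"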
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
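The command above is minikube's idempotent /etc/hosts update: strip any stale host.minikube.internal line, append the fresh mapping, and copy the result back into place. The net effect should be a single entry:

	192.168.49.1	host.minikube.internal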
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
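The kubelet flags above land in the 10-kubeadm.conf systemd drop-in that is scp'd a few lines below; the merged unit the kubelet actually runs with can be inspected on the node, for example:

	systemctl cat kubelet                 # unit file plus all drop-ins
	systemctl show -p ExecStart kubelet   # just the effective command line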
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
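This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new further down and swapped into place before init. If a generated file like this needs a by-hand sanity check, recent kubeadm releases can validate it against the v1beta4 schema (a sketch, assuming `kubeadm config validate` is available in this kubeadm build):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml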
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
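Since the ip_vs probe above failed, kube-vip runs in ARP mode per this manifest (vip_arp=true): the lease holder binds 192.168.49.254/32 on eth0 and answers ARP for it. Two quick checks once the pod is up (a sketch; the kubectl query needs a reachable API server, which this run never gets):

	ip addr show eth0 | grep 192.168.49.254          # VIP bound on the current leader
	kubectl -n kube-system get lease plndr-cp-lock   # the vip_leasename configured above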
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
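The apiserver profile cert generated above is signed for the service IP, loopback, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.254). Once the files are copied to the node below, the SANs can be confirmed with:

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'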
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
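This hash-and-link sequence is OpenSSL's standard hashed-directory trust lookup: `openssl x509 -hash` prints the subject-name hash, and a `<hash>.0` symlink in /etc/ssl/certs makes the CA discoverable by any client trusting that directory. For the minikube CA handled above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the link created above
	ls -l /etc/ssl/certs/b5213941.0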
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
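kubeadm's crictl advice above assumes the failing container exists; the sweeps at the end of this log show none were ever created, meaning the kubelet never brought up the static pods. In that situation the runtime and kubelet journals are the next stop, e.g.:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a   # empty here: static pods never created
	sudo journalctl -u kubelet --no-pager | tail -n 100
	sudo journalctl -u crio --no-pager | tail -n 100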
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
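All three control-plane health checks above failed the same way: connection refused (scheduler, controller-manager) or a timeout (apiserver), meaning the processes never came up at all rather than coming up unhealthy. The probe kubeadm performs can be reproduced independently of minikube. The sketch below is an illustrative standalone program, not minikube code; the three endpoints are copied from the kubeadm log above. Any HTTP status at all, even 401/403, would mean the component is listening; connection refused means nothing is bound to the port.

// healthprobe.go: a minimal sketch re-running the control-plane checks
// reported above. InsecureSkipVerify is needed because the components
// serve self-signed certificates. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	endpoints := map[string]string{
		"kube-apiserver":          "https://192.168.49.2:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	for name, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			// connection refused here matches the kubeadm failures above
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
	}
}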
	
	
	==> CRI-O <==
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468168675Z" level=info msg="createCtr: removing container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.468207713Z" level=info msg="createCtr: deleting container 5b0ee951a8d30dc41cbe0e80f8fd65534c65d3a6b97e8d5542e2681b411dba7d from storage" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:40 ha-608611 crio[779]: time="2025-10-09T18:54:40.470223387Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=8917b73c-c4e6-4e87-8d87-409c0fa122c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.441918254Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=5f830916-7502-45c7-a992-b1afe6a4ec2f name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.442961662Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ce442719-daad-4875-88bf-1eae8be1d0eb name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.443900487Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.444174088Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.448745276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.449318807Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.46398444Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465375584Z" level=info msg="createCtr: deleting container ID 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from idIndex" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.441485805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=30142e19-bbd7-4eb1-b9bc-3f7fd8b15d13 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.442431482Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bc06eb87-f8e1-4752-90ce-f306d71bb12c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443389229Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443682696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.446968447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.447385153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.460272538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461764017Z" level=info msg="createCtr: deleting container ID c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from idIndex" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461810281Z" level=info msg="createCtr: removing container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461842736Z" level=info msg="createCtr: deleting container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from storage" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.464060722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
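Every CreateContainer call in the CRI-O log above fails with the same error, "cannot open sd-bus: No such file or directory", which is why no control-plane container ever exists. A plausible reading (the report does not state a root cause) is that the runtime is using the systemd cgroup manager — the docker info captured later in this report shows CgroupDriver:systemd — and cannot reach a systemd D-Bus socket inside the node container. The hypothetical diagnostic below just checks for the sockets sd-bus conventionally connects to; the path list is an assumption, not taken from this log.

// sdbuscheck.go: a hypothetical diagnostic for the "cannot open sd-bus"
// errors above. When running as root, sd-bus typically uses systemd's
// private socket; otherwise the system D-Bus socket. Paths below are
// conventional defaults, not confirmed by this report.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/run/systemd/private",        // systemd private bus (root)
		"/run/dbus/system_bus_socket", // system D-Bus socket
	}
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: missing (%v)\n", p, err)
		} else {
			fmt.Printf("%s: present\n", p)
		}
	}
}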
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:49.421243    3859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:49.421827    3859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:49.423486    3859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:49.424016    3859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:49.425696    3859 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
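The describe-nodes failure above is a plain TCP-level refusal: nothing is listening on port 8443, consistent with the apiserver container never being created. A check that basic needs no kubectl at all; the standalone sketch below (illustrative, not part of the test suite) reproduces the "connection refused" with a raw dial against the two addresses seen in this report.

// dialcheck.go: an illustrative TCP reachability probe for the apiserver
// port kubectl fails to reach above. Connection refused means no
// listener; a successful dial means the port is at least bound.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	for _, addr := range []string{"127.0.0.1:8443", "192.168.49.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}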
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:49 up  1:37,  0 user,  load average: 0.48, 0.15, 0.11
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:40 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:40 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:40 ha-608611 kubelet[1930]: E1009 18:54:40.470638    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.441458    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468106    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468242    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:41 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.440984    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464410    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464511    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464543    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.045748    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.152695    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd5388d27  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,LastTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.081114    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: I1009 18:54:47.250003    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.250375    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:54:48 ha-608611 kubelet[1930]: E1009 18:54:48.459131    1930 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
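The kubelet log above closes the loop: CRI-O rejects every CreateContainer with the sd-bus error, so the static pods never start, so the kubelet cannot register the node with an apiserver that does not exist. The log-gathering step earlier in this report ran `sudo journalctl -u kubelet -n 400` over SSH; the sketch below is a rough standalone equivalent of that single command when run on the node itself, not minikube's actual logs.go implementation.

// gatherkubelet.go: a rough sketch of the kubelet log collection shown
// earlier in this report (logs.go runs the same journalctl command over
// SSH). Run directly on the node; requires sudo and journald.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}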
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (288.289419ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:49.786157   77465 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.54s)
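The `status --format={{.APIServer}}` invocation above renders a status struct through a Go template, and the struct dumped at status.go:176 later in this section lists the available fields (Name, Host, Kubelet, APIServer, Kubeconfig, ...). The sketch below is an illustrative reconstruction of that rendering with the values from this run, not minikube's own status code.

// statusformat.go: an illustrative reconstruction of how a --format
// template selects one field from the status struct. Field names and
// values are taken from the status.go:176 dump in this report.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	s := Status{
		Name:       "ha-608611",
		Host:       "Running",
		Kubelet:    "Running",
		APIServer:  "Stopped",
		Kubeconfig: "Misconfigured",
	}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		os.Exit(1)
	}
}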

x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.6s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 node stop m02 --alsologtostderr -v 5: exit status 85 (55.511517ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:54:49.842035   77577 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:49.842332   77577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:49.842341   77577 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:49.842346   77577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:49.842524   77577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:49.842781   77577 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:49.843121   77577 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:49.845193   77577 out.go:203] 
	W1009 18:54:49.846527   77577 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1009 18:54:49.846539   77577 out.go:285] * 
	* 
	W1009 18:54:49.849668   77577 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:54:49.851019   77577 out.go:203] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-608611 node stop m02 --alsologtostderr -v 5": exit status 85
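Exit status 85 with GUEST_NODE_RETRIEVE means the node lookup failed before any stop was attempted: the ha-608611 profile has no m02 entry because cluster startup failed before a second control-plane node could be added. The lookup can be approximated against the profile's on-disk config. The sketch below is speculative: the config path and the Nodes schema are assumptions based on a standard minikube install layout, not taken from this log.

// nodelookup.go: a speculative sketch of the node lookup that fails above
// with GUEST_NODE_RETRIEVE. ASSUMPTIONS: the profile config lives at
// ~/.minikube/profiles/<profile>/config.json and carries a Nodes array
// with Name fields; neither is confirmed by this report.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type profileConfig struct {
	Nodes []struct {
		Name string `json:"Name"`
	} `json:"Nodes"`
}

func main() {
	home, _ := os.UserHomeDir()
	path := filepath.Join(home, ".minikube", "profiles", "ha-608611", "config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	for _, n := range cfg.Nodes {
		if n.Name == "m02" {
			fmt.Println("found node m02")
			return
		}
	}
	fmt.Println("Could not find node m02") // matches the GUEST_NODE_RETRIEVE message
}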
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (286.628195ms)

-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1009 18:54:49.897594   77588 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:49.897789   77588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:49.897797   77588 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:49.897800   77588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:49.897991   77588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:49.898214   77588 out.go:368] Setting JSON to false
	I1009 18:54:49.898241   77588 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:49.898323   77588 notify.go:220] Checking for updates...
	I1009 18:54:49.898538   77588 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:49.898550   77588 status.go:174] checking status of ha-608611 ...
	I1009 18:54:49.898976   77588 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:49.920122   77588 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:54:49.920153   77588 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:49.920404   77588 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:54:49.938078   77588 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:49.938380   77588 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:54:49.938421   77588 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:49.956063   77588 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:54:50.056279   77588 ssh_runner.go:195] Run: systemctl --version
	I1009 18:54:50.062298   77588 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:54:50.074451   77588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:54:50.128418   77588 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:54:50.117976172 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:54:50.128845   77588 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:54:50.128869   77588 api_server.go:166] Checking apiserver status ...
	I1009 18:54:50.128913   77588 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:54:50.138848   77588 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:54:50.138872   77588 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:54:50.138884   77588 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5" : exit status 6
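status exits 6 here because the kubeconfig check fails even though the host container is running: status.go:458 reports that the ha-608611 cluster entry is missing from the kubeconfig, which is also why kubectl earlier kept falling back to localhost:8443. The sketch below shows the shape of such a lookup using client-go's kubeconfig loader (requires the k8s.io/client-go module in go.mod); it is an illustration, not minikube's status implementation.

// kubeconfiglookup.go: an illustrative kubeconfig check mirroring the
// status.go:458 failure above.
// Usage: go run kubeconfiglookup.go /path/to/kubeconfig
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("usage: kubeconfiglookup <kubeconfig path>")
		os.Exit(1)
	}
	cfg, err := clientcmd.LoadFromFile(os.Args[1])
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	const name = "ha-608611"
	cluster, ok := cfg.Clusters[name]
	if !ok {
		// same condition status.go reports above
		fmt.Printf("%q does not appear in %s\n", name, os.Args[1])
		os.Exit(1)
	}
	fmt.Printf("%q found, endpoint %s\n", name, cluster.Server)
}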
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (285.741694ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:50.432926   77713 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
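Both the stale-context warning in stdout and the "kubeconfig endpoint" error in stderr point at a missing kubeconfig entry for the profile rather than a stopped host, which is why the helper treats exit status 6 as possibly benign. A minimal recovery sketch, assuming the profile name and kubeconfig path from this run (not verified against this failure):

	# profile name and kubeconfig path taken from the log above
	out/minikube-linux-amd64 -p ha-608611 update-context
	# confirm the profile entry now appears in the kubeconfig
	grep -c ha-608611 /home/jenkins/minikube-integration/21139-11374/kubeconfig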
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node stop m02 --alsologtostderr -v 5                                                                  │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPaus
eInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
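	The sed edits above configure CRI-O's drop-in in place rather than templating a new file. Reconstructed from the sed expressions (the TOML section headers are assumed from CRI-O's stock layout, not captured from the node), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]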
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
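	The /etc/hosts update uses a write-to-temp-then-sudo-cp idiom: a plain sudo grep ... > /etc/hosts would open the redirection as the unprivileged user and would truncate the file while it is still being read. The same pattern, spelled out:

	# drop any stale entry, append the fresh one, then copy into place as root
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo $'192.168.49.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts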
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
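	This unit fragment is written to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte scp). A straightforward way to confirm what systemd actually merged, assuming shell access to the node:

	# print kubelet.service together with every drop-in, 10-kubeadm.conf included
	systemctl cat kubelet
	# required after unit changes, as this run does at 18:44:47
	sudo systemctl daemon-reload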
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
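	The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml. Nothing in this run validates the file separately, but kubeadm ships a subcommand for that; a hypothetical check, assuming it is available in this kubeadm build:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml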
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
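	Because lsmod finds no ip_vs modules, minikube skips IPVS-based control-plane load-balancing and kube-vip falls back to ARP announcement of the VIP (vip_arp=true in the manifest below). If IPVS were wanted, the usual fix is loading the modules, assuming the host kernel ships them at all (this GCP kernel already lacks the configs module, so they may be absent too):

	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	lsmod | grep ip_vs   # should now list the loaded modules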
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
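	kube-vip runs here as a static pod, takes leader election through the plndr-cp-lock lease in kube-system, and answers ARP for the HA virtual IP 192.168.49.254 on eth0. Two checks that would confirm it on a healthy node (hypothetical in this run, since the control plane never comes up):

	# the VIP should appear as a secondary address on the leader's eth0
	ip addr show eth0 | grep 192.168.49.254
	# the API server should answer through the VIP (livez allows anonymous access by default)
	curl -k https://192.168.49.254:8443/livez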
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
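	The openssl x509 -hash calls compute the subject-name hash OpenSSL uses for certificate lookup in /etc/ssl/certs, where each CA is reachable via a <hash>.0 symlink; that is how 148802.pem maps to 3ec20f2e.0, minikubeCA.pem to b5213941.0, and 14880.pem to 51391683.0 above. The linkage can be reproduced by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run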
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
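	This is the first of two kubeadm init attempts in this test; minikube resets and retries below and hits the same four-minute wait-control-plane timeout, with all three components refusing connections, which usually means the static pods crashed at startup or never started at all. Following the crictl hint kubeadm prints above, a minimal triage sketch on the node:

	# list kube-* containers, exited ones included, via CRI-O's socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect a failing container (CONTAINERID taken from the listing)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# kubelet often records why a static pod could not start
	sudo journalctl -u kubelet --no-pager | tail -n 50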
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.441485805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=30142e19-bbd7-4eb1-b9bc-3f7fd8b15d13 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.442431482Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bc06eb87-f8e1-4752-90ce-f306d71bb12c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443389229Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443682696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.446968447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.447385153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.460272538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461764017Z" level=info msg="createCtr: deleting container ID c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from idIndex" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461810281Z" level=info msg="createCtr: removing container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461842736Z" level=info msg="createCtr: deleting container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from storage" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.464060722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.441897275Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=37fdcf26-03b8-4707-8e57-5bd33d5c3faf name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.442937481Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a0cd1df6-f066-4772-8705-da945a5e1c2b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.44398635Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.444258563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.448175184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.44868377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.46115671Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.462579212Z" level=info msg="createCtr: deleting container ID efabd6faa594ba994f004d753bf838a65d97ccfb9156d81580b0d724e625f762 from idIndex" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.462627342Z" level=info msg="createCtr: removing container efabd6faa594ba994f004d753bf838a65d97ccfb9156d81580b0d724e625f762" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.462670027Z" level=info msg="createCtr: deleting container efabd6faa594ba994f004d753bf838a65d97ccfb9156d81580b0d724e625f762 from storage" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.465117034Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:51.004322    4034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:51.005378    4034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:51.005805    4034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:51.007433    4034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:51.007839    4034 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:51 up  1:37,  0 user,  load average: 0.48, 0.15, 0.11
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.440984    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464410    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464511    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464543    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.045748    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.152695    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd5388d27  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,LastTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.081114    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: I1009 18:54:47.250003    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.250375    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:54:48 ha-608611 kubelet[1930]: E1009 18:54:48.459131    1930 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 18:54:49 ha-608611 kubelet[1930]: E1009 18:54:49.995398    1930 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.441450    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.465491    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:50 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:50 ha-608611 kubelet[1930]:  > podSandboxID="770c3dd955a8e4513f9e5b862a3cb7f1d4ff6ebd095626539e3d2eb18ba246dc"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.465606    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:50 ha-608611 kubelet[1930]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:50 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.465644    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	

                                                
                                                
-- /stdout --
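The kubeadm output captured above ends with generic triage advice. A minimal session that follows it on this node might look like the sketch below (CONTAINERID is a placeholder for whatever ID the listing prints; both commands are taken verbatim from that advice):

	# List all Kubernetes containers known to CRI-O, including exited ones
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of whichever container turned out to be failing
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the listing comes back empty (the "container status" section above shows no containers at all), so the failure happens before any control-plane container exists to inspect.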
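The actual failure is visible in the CRI-O and kubelet sections above: every CreateContainer call dies with "cannot open sd-bus: No such file or directory", which suggests the runtime's systemd cgroup manager cannot reach systemd's bus inside the node container. A hedged way to check that configuration, assuming shell access via `minikube ssh -p ha-608611` (the config path and expected value below are the stock CRI-O defaults, not something this log proves):

	# Check which cgroup manager CRI-O is configured with; "systemd" requires a reachable sd-bus
	grep -R cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	# The bus endpoints a systemd-managed runtime talks to; absence here matches the error
	ls -l /run/systemd/private /run/dbus/system_bus_socket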
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (292.259173ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:54:51.381938   78051 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
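The status output above also warns that kubectl points at a stale context, and the stderr shows the "ha-608611" endpoint missing from the kubeconfig. Assuming the profile's context still exists, refreshing it is one command (a sketch for manual debugging, not something the test itself runs):

	minikube update-context -p ha-608611
	kubectl config current-context    # verify the active context afterwards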
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.60s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-608611" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
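The assertion reads the "Status" field of the matching profile out of that JSON. For manual debugging, one way to pull the same field, assuming jq is available (it is not part of the test tooling), is:

	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-608611") | .Status'
	# prints "Starting" in this run; the test expects "Degraded"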
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
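The JSON above is the tail of a `docker container inspect` dump for the ha-608611 container. As a spot-check outside the harness, the SSH port mapping the tests rely on can be read back with the same Go template that appears later in this log:

    docker container inspect ha-608611 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints 32783, matching the "22/tcp" entry in the Ports block above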
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (287.52959ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:51.996661   78299 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
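The exit status 6 is the kubeconfig check failing: the "ha-608611" entry is missing from the kubeconfig, so `status` can only report the host state. Following the hint printed in the stdout above, the context can be repaired by hand (a sketch; the exact outcome depends on what is left in the kubeconfig file):

    minikube update-context -p ha-608611
    kubectl config current-context   # verify the context resolves again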
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node stop m02 --alsologtostderr -v 5                                                                  │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
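	The run of identical `kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'` entries between 18:53 and 18:54 is the DeployApp step polling until every busybox replica reports a pod IP. A rough equivalent of that poll, reconstructed from the audit trail rather than from the harness source (the replica count of 3 is an assumption):

	    REPLICAS=3   # assumed busybox replica count, not taken from this log
	    until [ "$(kubectl get pods -o jsonpath='{.items[*].status.podIP}' | wc -w)" -ge "$REPLICAS" ]; do
	      sleep 5    # keep polling until every replica has been assigned an IP
	    done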
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
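	With the network created, the inspect that returned exit code 1 at 18:44:38.70 would now succeed. A minimal verification sketch using the same template fields the harness queries:

	    docker network inspect ha-608611 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected: 192.168.49.0/24 192.168.49.1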
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
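	The script above pins the machine name in /etc/hosts: if no entry ends in `ha-608611`, it rewrites an existing `127.0.1.1` line or appends one. An illustrative check of the result, not part of the harness:

	    docker exec ha-608611 grep '^127.0.1.1' /etc/hosts
	    # expect: 127.0.1.1 ha-608611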
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
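	The CRIO_MINIKUBE_OPTIONS step a few lines up writes a one-line environment file consumed by the CRI-O unit; reading it back should show the insecure-registry flag covering the service CIDR:

	    docker exec ha-608611 cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '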
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
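	The `find` invocation above is logged by ssh_runner with its shell quoting stripped. Restored, it reads roughly as follows (a reconstruction, not verbatim from the minikube source):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;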
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
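	Assuming the sed edits above all applied cleanly, the relevant fragment of /etc/crio/crio.conf.d/02-crio.conf picked up by this restart should read approximately:

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]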
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
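	(Editor's note: before handing the rendered config to kubeadm init, it can be schema-checked on the node. A hedged sketch; the `config validate` subcommand ships with recent kubeadm releases and is an assumption here, not something this log exercises. The binary and file paths match the ones scp'd below:)
	# parse and validate the generated config without starting anything
	# assumption: kubeadm >= v1.26, which provides `kubeadm config validate`
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml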
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
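	(Editor's note: kube-vip is deployed as a static pod, so the manifest above never passes through API-server admission; the kubelet reads it straight from /etc/kubernetes/manifests. A client-side dry run is one way to lint the rendered YAML first; a sketch, assuming kubectl is available on the node:)
	# --dry-run=client parses and validates the manifest locally only;
	# nothing is sent to (or required of) the API server
	kubectl apply --dry-run=client -f /etc/kubernetes/manifests/kube-vip.yaml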
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
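	(Editor's note: the <hash>.0 symlink names above — 3ec20f2e.0, b5213941.0, 51391683.0 — are OpenSSL subject-name hashes, which is how the /etc/ssl/certs lookup directory works. Reproducing one by hand:)
	# prints the subject hash OpenSSL uses to find a CA in /etc/ssl/certs,
	# e.g. b5213941 for minikubeCA.pem, matching the symlink created above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem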
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
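	(Editor's note: kubeadm's own hint above is the right triage order for a wait-control-plane timeout. Expanded slightly into a hedged sequence, using only commands that appear elsewhere in this log:)
	# list control-plane containers, including ones that exited immediately
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# pull the failing container's logs by the ID found in the listing above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# the kubelet journal shows why static pods were never started or kept crashing
	sudo journalctl -u kubelet -n 400 --no-pager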
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465420508Z" level=info msg="createCtr: removing container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.465459824Z" level=info msg="createCtr: deleting container 83743aebcddc36aef5c02af3dcd233f5d07925ba9d0281ad1316ac7a648aa44c from storage" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:41 ha-608611 crio[779]: time="2025-10-09T18:54:41.467757138Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=8153e351-397b-4d26-8090-24d2b3631bd1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.441485805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=30142e19-bbd7-4eb1-b9bc-3f7fd8b15d13 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.442431482Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=bc06eb87-f8e1-4752-90ce-f306d71bb12c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443389229Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.443682696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.446968447Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.447385153Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.460272538Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461764017Z" level=info msg="createCtr: deleting container ID c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from idIndex" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461810281Z" level=info msg="createCtr: removing container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.461842736Z" level=info msg="createCtr: deleting container c4531b33398cdc11b3df5f5c569221cb658215b7f587bf4d85d9449bd3ddd90e from storage" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:45 ha-608611 crio[779]: time="2025-10-09T18:54:45.464060722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=48bcacc5-d500-406f-b681-150dff61658f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.441897275Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=37fdcf26-03b8-4707-8e57-5bd33d5c3faf name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.442937481Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a0cd1df6-f066-4772-8705-da945a5e1c2b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.44398635Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.444258563Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.448175184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.44868377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.46115671Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.462579212Z" level=info msg="createCtr: deleting container ID efabd6faa594ba994f004d753bf838a65d97ccfb9156d81580b0d724e625f762 from idIndex" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.462627342Z" level=info msg="createCtr: removing container efabd6faa594ba994f004d753bf838a65d97ccfb9156d81580b0d724e625f762" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.462670027Z" level=info msg="createCtr: deleting container efabd6faa594ba994f004d753bf838a65d97ccfb9156d81580b0d724e625f762 from storage" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:54:50 ha-608611 crio[779]: time="2025-10-09T18:54:50.465117034Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=dc8cbb58-5900-408c-be85-4cd843d20f35 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:54:52.569522    4206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:52.570010    4206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:52.571599    4206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:52.572026    4206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:54:52.573571    4206 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:54:52 up  1:37,  0 user,  load average: 0.48, 0.15, 0.11
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:54:41 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:41 ha-608611 kubelet[1930]: E1009 18:54:41.468280    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.440984    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464410    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464511    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:45 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:45 ha-608611 kubelet[1930]: E1009 18:54:45.464543    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.045748    1930 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 18:54:46 ha-608611 kubelet[1930]: E1009 18:54:46.152695    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd5388d27  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,LastTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.081114    1930 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: I1009 18:54:47.250003    1930 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 18:54:47 ha-608611 kubelet[1930]: E1009 18:54:47.250375    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:54:48 ha-608611 kubelet[1930]: E1009 18:54:48.459131    1930 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 18:54:49 ha-608611 kubelet[1930]: E1009 18:54:49.995398    1930 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.441450    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.465491    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:54:50 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:50 ha-608611 kubelet[1930]:  > podSandboxID="770c3dd955a8e4513f9e5b862a3cb7f1d4ff6ebd095626539e3d2eb18ba246dc"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.465606    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:54:50 ha-608611 kubelet[1930]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:54:50 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:54:50 ha-608611 kubelet[1930]: E1009 18:54:50.465644    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	

-- /stdout --
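Every container-create attempt in the CRI-O and kubelet logs above fails with the same error: "cannot open sd-bus: No such file or directory". A minimal triage sketch, following the crictl/journalctl guidance kubeadm prints earlier in this report (the runtime-endpoint socket is the one from the log; the cgroup-manager check at the end is an assumption, since CRI-O only talks to sd-bus when it is configured with the systemd cgroup manager):

	# List the kube containers CRI-O attempted to create (all of them fail before starting):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Confirm the sd-bus failures in the CRI-O journal:
	sudo journalctl -u crio -n 400 | grep -i 'sd-bus'
	# Assumed follow-up: check whether CRI-O is using the systemd cgroup manager,
	# which needs a reachable systemd bus inside the node container:
	sudo grep -R 'cgroup_manager' /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null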
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (292.338381ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:54:52.936967   78632 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)
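The exit-status-6 failure above is secondary: because StartCluster never completed, no "ha-608611" entry was written to the kubeconfig, so every status probe reports the endpoint as missing. A sketch of the remediation minikube itself prints (profile name taken from this run):

	minikube update-context -p ha-608611
	kubectl config current-context    # verify the context now resolves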

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (52.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 node start m02 --alsologtostderr -v 5: exit status 85 (54.385181ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1009 18:54:52.992719   78747 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:52.993012   78747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:52.993021   78747 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:52.993025   78747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:52.993254   78747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:52.993513   78747 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:52.993851   78747 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:52.995755   78747 out.go:203] 
	W1009 18:54:52.997159   78747 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1009 18:54:52.997173   78747 out.go:285] * 
	* 
	W1009 18:54:53.000261   78747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:54:53.001619   78747 out.go:203] 

** /stderr **
ha_test.go:424: I1009 18:54:52.992719   78747 out.go:360] Setting OutFile to fd 1 ...
I1009 18:54:52.993012   78747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:54:52.993021   78747 out.go:374] Setting ErrFile to fd 2...
I1009 18:54:52.993025   78747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:54:52.993254   78747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:54:52.993513   78747 mustload.go:65] Loading cluster: ha-608611
I1009 18:54:52.993851   78747 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:54:52.995755   78747 out.go:203] 
W1009 18:54:52.997159   78747 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1009 18:54:52.997173   78747 out.go:285] * 
* 
W1009 18:54:53.000261   78747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1009 18:54:53.001619   78747 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-608611 node start m02 --alsologtostderr -v 5": exit status 85
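Exit status 85 (GUEST_NODE_RETRIEVE) means the profile config simply has no m02 node: the HA bring-up failed before any secondary node was added, so there is nothing to restart. A quick confirmation, as a sketch:

	minikube -p ha-608611 node list    # prints every node the profile knows about, with its IP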
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (291.018863ms)

-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1009 18:54:53.048552   78758 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:53.048829   78758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:53.048839   78758 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:53.048843   78758 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:53.049128   78758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:53.049384   78758 out.go:368] Setting JSON to false
	I1009 18:54:53.049414   78758 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:53.049550   78758 notify.go:220] Checking for updates...
	I1009 18:54:53.049828   78758 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:53.049843   78758 status.go:174] checking status of ha-608611 ...
	I1009 18:54:53.050300   78758 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:53.069320   78758 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:54:53.069360   78758 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:53.069677   78758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:54:53.087624   78758 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:53.087867   78758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:54:53.087920   78758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:53.105229   78758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:54:53.205263   78758 ssh_runner.go:195] Run: systemctl --version
	I1009 18:54:53.211625   78758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:54:53.223927   78758 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:54:53.282761   78758 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:54:53.273488118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:54:53.283229   78758 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:54:53.283258   78758 api_server.go:166] Checking apiserver status ...
	I1009 18:54:53.283297   78758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:54:53.293195   78758 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:54:53.293216   78758 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:54:53.293225   78758 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1009 18:54:53.297054   14880 retry.go:31] will retry after 996.089224ms: exit status 6
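The "apiserver: Stopped" verdict above comes from minikube SSHing into the node and probing for a kube-apiserver process. The same probe can be reproduced by hand, as a sketch (the pgrep pattern is the one minikube logs above):

	minikube -p ha-608611 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# exits non-zero here: CRI-O never managed to create the apiserver container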
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (288.434542ms)

-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1009 18:54:54.333874   78899 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:54.334129   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:54.334154   78899 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:54.334160   78899 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:54.334388   78899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:54.334592   78899 out.go:368] Setting JSON to false
	I1009 18:54:54.334625   78899 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:54.334734   78899 notify.go:220] Checking for updates...
	I1009 18:54:54.335094   78899 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:54.335111   78899 status.go:174] checking status of ha-608611 ...
	I1009 18:54:54.335630   78899 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:54.353773   78899 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:54:54.353797   78899 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:54.354051   78899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:54:54.372337   78899 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:54.372580   78899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:54:54.372641   78899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:54.390297   78899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:54:54.490274   78899 ssh_runner.go:195] Run: systemctl --version
	I1009 18:54:54.496568   78899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:54:54.508468   78899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:54:54.567546   78899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:54:54.557180785 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:54:54.567923   78899 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:54:54.567946   78899 api_server.go:166] Checking apiserver status ...
	I1009 18:54:54.567976   78899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:54:54.578274   78899 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:54:54.578298   78899 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:54:54.578308   78899 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:54:54.582039   14880 retry.go:31] will retry after 1.822175399s: exit status 6
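The api_server.go lines in each attempt show the follow-up probe: once the kubeconfig check fails, the status code looks for a kube-apiserver process on the node with `sudo pgrep -xnf kube-apiserver.*minikube.*`, and pgrep's exit status 1 (no match) becomes "unable to get apiserver pid". A hedged sketch of that exit-code handling with os/exec:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// pgrep exits 0 and prints the PID on a match, 1 when nothing matches.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("apiserver pid: %s", out)
	case errors.As(err, &ee) && ee.ExitCode() == 1:
		fmt.Println("stopped: unable to get apiserver pid")
	default:
		fmt.Println("pgrep failed:", err)
	}
}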
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (284.965706ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:54:56.447473   79012 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:56.447697   79012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:56.447704   79012 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:56.447708   79012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:56.447891   79012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:56.448041   79012 out.go:368] Setting JSON to false
	I1009 18:54:56.448068   79012 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:56.448187   79012 notify.go:220] Checking for updates...
	I1009 18:54:56.448385   79012 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:56.448396   79012 status.go:174] checking status of ha-608611 ...
	I1009 18:54:56.448754   79012 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:56.466515   79012 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:54:56.466534   79012 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:56.466772   79012 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:54:56.483937   79012 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:56.484179   79012 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:54:56.484214   79012 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:56.503491   79012 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:54:56.602377   79012 ssh_runner.go:195] Run: systemctl --version
	I1009 18:54:56.608832   79012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:54:56.620966   79012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:54:56.676265   79012 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:54:56.666711734 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:54:56.676669   79012 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:54:56.676694   79012 api_server.go:166] Checking apiserver status ...
	I1009 18:54:56.676729   79012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:54:56.687002   79012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:54:56.687021   79012 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:54:56.687055   79012 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:54:56.691122   14880 retry.go:31] will retry after 3.146826668s: exit status 6
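The retry.go:31 lines wrap each failed status call in a backoff loop; the intervals grow roughly exponentially with jitter (996ms, 1.8s, 3.1s, ...). A stdlib-only sketch of that pattern (minikube's own retry package may differ):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs f up to attempts times, doubling the delay and adding jitter
// between tries, mirroring the growing "will retry after ..." intervals.
func retry(attempts int, base time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		delay := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retry(5, time.Second, func() error { return errors.New("exit status 6") })
}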
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (295.137644ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:54:59.881845   79151 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:54:59.882103   79151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:59.882114   79151 out.go:374] Setting ErrFile to fd 2...
	I1009 18:54:59.882117   79151 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:54:59.882331   79151 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:54:59.882499   79151 out.go:368] Setting JSON to false
	I1009 18:54:59.882526   79151 mustload.go:65] Loading cluster: ha-608611
	I1009 18:54:59.882670   79151 notify.go:220] Checking for updates...
	I1009 18:54:59.882863   79151 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:54:59.882878   79151 status.go:174] checking status of ha-608611 ...
	I1009 18:54:59.883364   79151 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:54:59.902704   79151 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:54:59.902732   79151 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:59.903069   79151 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:54:59.922786   79151 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:54:59.923053   79151 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:54:59.923087   79151 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:54:59.941868   79151 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:00.041491   79151 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:00.047969   79151 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:55:00.060482   79151 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:00.118810   79151 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:55:00.10902197 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:55:00.119274   79151 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:00.119304   79151 api_server.go:166] Checking apiserver status ...
	I1009 18:55:00.119346   79151 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:55:00.129674   79151 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:00.129704   79151 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:55:00.129719   79151 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:55:00.133551   14880 retry.go:31] will retry after 3.78971579s: exit status 6
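Before each SSH probe, the cli_runner.go lines resolve the node's forwarded SSH port by templating docker inspect, which is how the client ends up on 127.0.0.1:32783. The same lookup sketched with os/exec (the template is standard `docker inspect -f` syntax; the surrounding single quotes in the logged command are minikube's own trimming convention):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker which host port is bound to 22/tcp in the ha-608611
	// container; in this run the answer is 32783.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-608611").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}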
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (287.175714ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:55:03.965502   79282 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:03.965835   79282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:03.965843   79282 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:03.965846   79282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:03.966045   79282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:03.966230   79282 out.go:368] Setting JSON to false
	I1009 18:55:03.966258   79282 mustload.go:65] Loading cluster: ha-608611
	I1009 18:55:03.966370   79282 notify.go:220] Checking for updates...
	I1009 18:55:03.966572   79282 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:03.966585   79282 status.go:174] checking status of ha-608611 ...
	I1009 18:55:03.966966   79282 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:03.986766   79282 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:55:03.986809   79282 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:03.987064   79282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:04.004766   79282 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:04.005219   79282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:04.005271   79282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:04.024456   79282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:04.124478   79282 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:04.130511   79282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:55:04.142585   79282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:04.196858   79282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:55:04.187429683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:55:04.197297   79282 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:04.197324   79282 api_server.go:166] Checking apiserver status ...
	I1009 18:55:04.197354   79282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:55:04.207295   79282 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:04.207317   79282 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:55:04.207330   79282 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:55:04.211552   14880 retry.go:31] will retry after 2.929497792s: exit status 6
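The sshutil.go:53 / ssh_runner.go:195 pairs are the per-attempt node probes: dial the forwarded port with the machine's id_rsa key and run commands such as the `df -h /var | awk 'NR==2{print $5}'` disk-usage check. A minimal sketch with golang.org/x/crypto/ssh (host-key verification is skipped here purely for brevity; this is not minikube's exact code):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-608611/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32783", cfg) // port from the inspect template
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output(`df -h /var | awk 'NR==2{print $5}'`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("/var usage: %s", out)
}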
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (290.945852ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:55:07.184924   79402 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:07.185179   79402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:07.185186   79402 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:07.185190   79402 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:07.185385   79402 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:07.185557   79402 out.go:368] Setting JSON to false
	I1009 18:55:07.185582   79402 mustload.go:65] Loading cluster: ha-608611
	I1009 18:55:07.185728   79402 notify.go:220] Checking for updates...
	I1009 18:55:07.185894   79402 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:07.185906   79402 status.go:174] checking status of ha-608611 ...
	I1009 18:55:07.186308   79402 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:07.207540   79402 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:55:07.207564   79402 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:07.207771   79402 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:07.225606   79402 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:07.225871   79402 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:07.225921   79402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:07.243869   79402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:07.343540   79402 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:07.349940   79402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:55:07.362676   79402 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:07.418848   79402 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:55:07.40794788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:55:07.419319   79402 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:07.419351   79402 api_server.go:166] Checking apiserver status ...
	I1009 18:55:07.419383   79402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:55:07.430237   79402 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:07.430258   79402 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:55:07.430271   79402 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:55:07.434707   14880 retry.go:31] will retry after 8.68466311s: exit status 6
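The stdout block keeps repeating the suggested remedy, `minikube update-context`, which rewrites the profile's cluster/context entries into the kubeconfig that the endpoint check cannot find. A hedged sketch of the effect using client-go; the server URL, certificate paths, and the choice of host port 32786 (the 8443/tcp mapping in the inspect dump further below) are illustrative assumptions, not minikube's exact behavior:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := clientcmd.RecommendedHomeFile
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig()
	}
	// Re-add the missing profile entries; paths here are placeholders.
	cfg.Clusters["ha-608611"] = &api.Cluster{
		Server:               "https://127.0.0.1:32786",
		CertificateAuthority: "/home/jenkins/.minikube/ca.crt",
	}
	cfg.AuthInfos["ha-608611"] = &api.AuthInfo{
		ClientCertificate: "/home/jenkins/.minikube/profiles/ha-608611/client.crt",
		ClientKey:         "/home/jenkins/.minikube/profiles/ha-608611/client.key",
	}
	cfg.Contexts["ha-608611"] = &api.Context{Cluster: "ha-608611", AuthInfo: "ha-608611"}
	cfg.CurrentContext = "ha-608611"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}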
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (291.730099ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:55:16.167073   79565 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:16.167349   79565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:16.167361   79565 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:16.167367   79565 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:16.167608   79565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:16.167804   79565 out.go:368] Setting JSON to false
	I1009 18:55:16.167836   79565 mustload.go:65] Loading cluster: ha-608611
	I1009 18:55:16.167947   79565 notify.go:220] Checking for updates...
	I1009 18:55:16.168300   79565 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:16.168317   79565 status.go:174] checking status of ha-608611 ...
	I1009 18:55:16.168882   79565 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:16.186769   79565 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:55:16.186794   79565 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:16.187065   79565 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:16.205024   79565 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:16.205343   79565 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:16.205408   79565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:16.223419   79565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:16.323316   79565 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:16.329812   79565 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:55:16.341869   79565 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:16.396601   79565 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:55:16.386537838 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:55:16.396989   79565 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:16.397011   79565 api_server.go:166] Checking apiserver status ...
	I1009 18:55:16.397054   79565 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:55:16.407252   79565 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:16.407281   79565 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:55:16.407291   79565 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:55:16.411536   14880 retry.go:31] will retry after 8.392768087s: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (290.753476ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:55:24.846419   79726 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:24.846656   79726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:24.846665   79726 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:24.846669   79726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:24.846859   79726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:24.847023   79726 out.go:368] Setting JSON to false
	I1009 18:55:24.847056   79726 mustload.go:65] Loading cluster: ha-608611
	I1009 18:55:24.847112   79726 notify.go:220] Checking for updates...
	I1009 18:55:24.847392   79726 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:24.847408   79726 status.go:174] checking status of ha-608611 ...
	I1009 18:55:24.847808   79726 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:24.867239   79726 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:55:24.867266   79726 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:24.867536   79726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:24.886017   79726 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:24.886273   79726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:24.886311   79726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:24.904807   79726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:25.004397   79726 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:25.010814   79726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:55:25.023000   79726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:25.081122   79726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:55:25.07076916 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:55:25.081591   79726 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:25.081616   79726 api_server.go:166] Checking apiserver status ...
	I1009 18:55:25.081663   79726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:55:25.091691   79726 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:25.091722   79726 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:55:25.091733   79726 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
I1009 18:55:25.096038   14880 retry.go:31] will retry after 19.243968458s: exit status 6
E1009 18:55:34.610728   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 6 (292.294119ms)
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	I1009 18:55:44.391419   79939 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:44.391653   79939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:44.391661   79939 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:44.391665   79939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:44.391833   79939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:44.391986   79939 out.go:368] Setting JSON to false
	I1009 18:55:44.392013   79939 mustload.go:65] Loading cluster: ha-608611
	I1009 18:55:44.392094   79939 notify.go:220] Checking for updates...
	I1009 18:55:44.392371   79939 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:44.392386   79939 status.go:174] checking status of ha-608611 ...
	I1009 18:55:44.393403   79939 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:44.412129   79939 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 18:55:44.412176   79939 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:44.412520   79939 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:44.432156   79939 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:44.432386   79939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:44.432428   79939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:44.450750   79939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:44.552524   79939 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:44.558835   79939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:55:44.571458   79939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:44.627214   79939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:55:44.617151001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1009 18:55:44.627782   79939 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:44.627820   79939 api_server.go:166] Checking apiserver status ...
	I1009 18:55:44.627862   79939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 18:55:44.638434   79939 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:44.638460   79939 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 18:55:44.638471   79939 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5" : exit status 6
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
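The docker inspect dump below is the most useful part of this post-mortem: HostConfig.PortBindings requests ephemeral host ports (empty "HostPort" values), while NetworkSettings.Ports records what Docker actually assigned (22/tcp -> 32783 for SSH, 8443/tcp -> 32786 for the apiserver). A small decoder sketch for pulling those assignments out of the JSON array that `docker inspect` prints:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container is a pared-down view of the inspect JSON; the field names
// follow the dump below.
type container struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	raw, err := exec.Command("docker", "inspect", "ha-608611").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect always prints a JSON array
	if err := json.Unmarshal(raw, &cs); err != nil {
		panic(err)
	}
	for proto, binds := range cs[0].NetworkSettings.Ports {
		for _, b := range binds {
			fmt.Printf("%s %s -> %s:%s\n", cs[0].Name, proto, b.HostIp, b.HostPort)
		}
	}
}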
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
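Note on reading the inspect dump above: the empty "HostPort" values near the top sit under HostConfig.PortBindings and mean "bind to an ephemeral host port"; the ports Docker actually assigned appear further down under NetworkSettings.Ports (22/tcp -> 127.0.0.1:32783, 8443/tcp -> 127.0.0.1:32786, and so on). If you are replaying this post-mortem by hand, the assigned port can be read back with the same Go-template query minikube itself runs later in this log; a minimal sketch, assuming the ha-608611 container still exists:

	# query the host port mapped to the container's SSH port (22/tcp); prints 32783 on this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611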
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (284.53678ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:55:44.931781   80069 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
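Note: exit status 6 here matches the stale-kubeconfig case rather than a stopped host: the host reports Running, but the "ha-608611" entry is missing from the kubeconfig named in the stderr block above, so the endpoint lookup fails. The warning in the stdout block gives the documented fix; a minimal sketch against this profile, using the same binary as the Audit table below:

	# rewrite the kubeconfig entry for this profile so kubectl and status checks can resolve the endpoint
	out/minikube-linux-amd64 -p ha-608611 update-context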
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node stop m02 --alsologtostderr -v 5                                                                  │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node start m02 --alsologtostderr -v 5                                                                 │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
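[note] The generated config above can be sanity-checked offline before kubeadm init consumes it; a sketch, assuming a kubeadm v1.34 binary on PATH (the log invokes it from /var/lib/minikube/binaries/v1.34.1, and `kubeadm config validate` requires kubeadm >= v1.26):

    # Validate the generated config without touching the node.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    # Or render the fully-defaulted view for inspection:
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run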
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
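[note] Because the ip_vs modules were unavailable, this manifest runs kube-vip in ARP mode (vip_arp=true) with leader election tuned to a 5s lease, 3s renew deadline and 1s retry, announcing 192.168.49.254/32 on eth0. Once the static pod is running, the VIP can be checked on the elected node with a sketch like:

    # The VIP should appear as a secondary address on eth0 of the leader ...
    ip addr show eth0 | grep 192.168.49.254
    # ... and the HA API endpoint should answer (cert verification skipped for brevity).
    curl -k https://192.168.49.254:8443/livez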
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
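[note] The apiserver certificate generated above must carry every address clients will use: the ClusterIP service address (10.96.0.1), localhost, the node IP (192.168.49.2) and the kube-vip VIP (192.168.49.254). A quick way to confirm the SANs on the produced certificate, using the path from the log:

    # List the Subject Alternative Names baked into the apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt \
      | grep -A1 'Subject Alternative Name'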
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
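[note] The /etc/ssl/certs/<hash>.0 links created in the steps above follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash, which is how `openssl verify -CApath` locates CA certs in a directory. A sketch of producing such a link for an arbitrary PEM file:

    # Link a CA cert under its OpenSSL subject hash (matches b5213941.0 etc. above).
    cert=/usr/share/ca-certificates/minikubeCA.pem   # any PEM CA cert
    sudo ln -fs "$cert" /etc/ssl/certs/"$(openssl x509 -hash -noout -in "$cert")".0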
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
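[note] All three control-plane checks timed out at the full 4m0s, which usually means the static pods never came up rather than came up and failed the probes. Following kubeadm's advice above, a minimal triage pass (the crictl commands are the ones kubeadm suggests; the journalctl calls are the usual next step):

    # Look for control-plane containers that crashed or never appeared.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # If a container shows up, read its logs (CONTAINERID from the listing above):
    #   sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    # If nothing shows up at all, check why the kubelet/CRI-O never launched them:
    sudo journalctl -u kubelet --no-pager -n 200
    sudo journalctl -u crio --no-pager -n 200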
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
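[note] The next step in the log is a forced kubeadm reset before retrying init. Per kubeadm's own documentation, reset removes /etc/kubernetes state and local etcd data but does not flush iptables/IPVS rules or CNI configuration, so a fuller manual cleanup would look roughly like:

    # The reset the log runs, followed by the cleanup kubeadm leaves to the operator.
    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
    sudo rm -rf /etc/cni/net.d   # CNI configuration is not removed by reset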
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:55:38 ha-608611 crio[779]: time="2025-10-09T18:55:38.464584103Z" level=info msg="createCtr: removing container 66efc4ce1bb6b7a68e4f7a64dc88597c07ed1d33405a1af6f5e25aafbfc36870" id=083e23b1-3cf8-49b4-a617-f8e61c79aa41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:38 ha-608611 crio[779]: time="2025-10-09T18:55:38.464682796Z" level=info msg="createCtr: deleting container 66efc4ce1bb6b7a68e4f7a64dc88597c07ed1d33405a1af6f5e25aafbfc36870 from storage" id=083e23b1-3cf8-49b4-a617-f8e61c79aa41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:38 ha-608611 crio[779]: time="2025-10-09T18:55:38.468233204Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=083e23b1-3cf8-49b4-a617-f8e61c79aa41 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.442097299Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=972cce24-c28c-4aa3-bb38-a0088b966b42 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.443018461Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f5cf8edd-ddc2-4b04-a02c-9e7bb9c809cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.443979245Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.444258189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.447529221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.447998742Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.463426171Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.464844464Z" level=info msg="createCtr: deleting container ID a4e348fe8c0634be7997fa851a9d7874ff42e4fcc6b6ded7430d4d440fa54d76 from idIndex" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.464882354Z" level=info msg="createCtr: removing container a4e348fe8c0634be7997fa851a9d7874ff42e4fcc6b6ded7430d4d440fa54d76" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.464921474Z" level=info msg="createCtr: deleting container a4e348fe8c0634be7997fa851a9d7874ff42e4fcc6b6ded7430d4d440fa54d76 from storage" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.466974765Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.441682973Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a5e334c8-b59c-4949-8280-bc0330f89259 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.442634699Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=49f09a5b-0988-40a6-8dbe-9b4e726fc2f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.443494892Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.443724316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.448307137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.448750659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.45910449Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.460482171Z" level=info msg="createCtr: deleting container ID c9001fcb027033e1331eceba04cac65f617fa86b27c007794e067cfc50667267 from idIndex" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.460524339Z" level=info msg="createCtr: removing container c9001fcb027033e1331eceba04cac65f617fa86b27c007794e067cfc50667267" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.46056105Z" level=info msg="createCtr: deleting container c9001fcb027033e1331eceba04cac65f617fa86b27c007794e067cfc50667267 from storage" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.462856948Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:55:45.513949    4611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:45.514506    4611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:45.516134    4611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:45.516620    4611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:45.518200    4611 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:55:45 up  1:38,  0 user,  load average: 0.27, 0.15, 0.11
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.266214    1930 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.441609    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.467340    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:55:43 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:43 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.467459    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:55:43 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:43 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.467501    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.441244    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.463177    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:55:44 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:44 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.463304    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:55:44 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:44 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.463348    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.441359    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.466231    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:55:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:45 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.466335    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:55:45 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.466365    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	

-- /stdout --
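Every control-plane container in the log above dies with the same CRI-O error, "Container creation error: cannot open sd-bus: No such file or directory". One plausible reading (not confirmed anywhere in this log) is that the OCI runtime is trying to manage cgroups through systemd's D-Bus socket and no such socket is reachable inside the kicbase container. A minimal triage sketch along the lines kubeadm itself suggests, assuming the node is reachable via `minikube ssh` (CONTAINERID is a placeholder for an ID returned by the first command):

	out/minikube-linux-amd64 -p ha-608611 ssh -- sudo crictl ps -a
	out/minikube-linux-amd64 -p ha-608611 ssh -- sudo crictl logs CONTAINERID
	out/minikube-linux-amd64 -p ha-608611 ssh -- sudo journalctl -u crio -n 100 --no-pager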
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (295.08144ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:55:45.886544   80404 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.95s)
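The "stale minikube-vm" warning in the status output above carries its own remedy: when the kubeconfig merely points at an outdated endpoint, `minikube update-context` rewrites the context for the running cluster. A sketch of that path (it would not rescue this run, where the apiserver never became healthy, but it addresses the warning text):

	out/minikube-linux-amd64 -p ha-608611 update-context
	kubectl config current-context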

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.58s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-608611" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-608611" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68571,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:44:43.760299717Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4f6557069285c9379d4788b404b85a7f7332b0f0915fb426eb2d3ffb6f02df65",
	            "SandboxKey": "/var/run/docker/netns/4f6557069285",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:dc:55:21:78:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "322add21e309d24bef79b6b7f428ea8a1994c3d46e02d36bb4debf9950e6c0a5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
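The inspect output above maps each container port to an ephemeral host port on 127.0.0.1. A minimal sketch for pulling a single mapping back out (assuming the ha-608611 container still exists), using the same Go template the minikube logs below apply to 22/tcp:

    # Host port that 127.0.0.1 forwards to the API server port 8443 in the container
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-608611
    # expected from the JSON above: 32786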
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 6 (294.430891ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:55:46.515025   80652 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
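The stderr above explains the exit status 6: the "ha-608611" entry is missing from the kubeconfig, so the endpoint lookup fails even though the host itself is Running. Following the warning's own suggestion, a recovery sketch (assuming the profile still exists):

    out/minikube-linux-amd64 update-context -p ha-608611   # rewrite the kubeconfig entry for this profile
    out/minikube-linux-amd64 status -p ha-608611           # should no longer exit 6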
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-753440 update-context --alsologtostderr -v=2                                                         │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ image          │ functional-753440 image ls                                                                                      │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:40 UTC │ 09 Oct 25 18:40 UTC │
	│ delete         │ -p functional-753440                                                                                            │ functional-753440 │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │ 09 Oct 25 18:44 UTC │
	│ start          │ ha-608611 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:44 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- rollout status deployment/busybox                                                          │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl        │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node add --alsologtostderr -v 5                                                                       │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node stop m02 --alsologtostderr -v 5                                                                  │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node           │ ha-608611 node start m02 --alsologtostderr -v 5                                                                 │ ha-608611         │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:44:38
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:44:38.499708   68004 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:44:38.499979   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.499990   68004 out.go:374] Setting ErrFile to fd 2...
	I1009 18:44:38.499995   68004 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:44:38.500193   68004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:44:38.500672   68004 out.go:368] Setting JSON to false
	I1009 18:44:38.501534   68004 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5226,"bootTime":1760030252,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:44:38.501651   68004 start.go:141] virtualization: kvm guest
	I1009 18:44:38.503753   68004 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:44:38.505161   68004 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:44:38.505174   68004 notify.go:220] Checking for updates...
	I1009 18:44:38.507971   68004 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:44:38.509361   68004 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:44:38.510823   68004 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:44:38.512241   68004 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:44:38.513815   68004 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:44:38.515465   68004 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:44:38.539241   68004 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:44:38.539344   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.597491   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.585969456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.597607   68004 docker.go:318] overlay module found
	I1009 18:44:38.599712   68004 out.go:179] * Using the docker driver based on user configuration
	I1009 18:44:38.601190   68004 start.go:305] selected driver: docker
	I1009 18:44:38.601208   68004 start.go:925] validating driver "docker" against <nil>
	I1009 18:44:38.601220   68004 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:44:38.601773   68004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:44:38.656624   68004 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:44:38.646723999 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:44:38.656772   68004 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 18:44:38.656973   68004 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:44:38.658777   68004 out.go:179] * Using Docker driver with root privileges
	I1009 18:44:38.660475   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:38.660538   68004 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 18:44:38.660548   68004 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:44:38.660625   68004 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:38.662228   68004 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:44:38.663758   68004 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:44:38.665163   68004 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:44:38.666518   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:38.666553   68004 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:44:38.666561   68004 cache.go:64] Caching tarball of preloaded images
	I1009 18:44:38.666652   68004 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:44:38.666665   68004 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:44:38.666636   68004 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:44:38.667052   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:38.667080   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json: {Name:mk7eb36c0f629760ce25ed6ea0be36fe97501d95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:38.687956   68004 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:44:38.687977   68004 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:44:38.687999   68004 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:44:38.688029   68004 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:44:38.688196   68004 start.go:364] duration metric: took 118.358µs to acquireMachinesLock for "ha-608611"
	I1009 18:44:38.688228   68004 start.go:93] Provisioning new machine with config: &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:44:38.688308   68004 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:44:38.690596   68004 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1009 18:44:38.690877   68004 start.go:159] libmachine.API.Create for "ha-608611" (driver="docker")
	I1009 18:44:38.690915   68004 client.go:168] LocalClient.Create starting
	I1009 18:44:38.691016   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 18:44:38.691065   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691090   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691160   68004 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 18:44:38.691207   68004 main.go:141] libmachine: Decoding PEM data...
	I1009 18:44:38.691219   68004 main.go:141] libmachine: Parsing certificate...
	I1009 18:44:38.691649   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:44:38.708961   68004 cli_runner.go:211] docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:44:38.709049   68004 network_create.go:284] running [docker network inspect ha-608611] to gather additional debugging logs...
	I1009 18:44:38.709068   68004 cli_runner.go:164] Run: docker network inspect ha-608611
	W1009 18:44:38.724919   68004 cli_runner.go:211] docker network inspect ha-608611 returned with exit code 1
	I1009 18:44:38.724948   68004 network_create.go:287] error running [docker network inspect ha-608611]: docker network inspect ha-608611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-608611 not found
	I1009 18:44:38.724959   68004 network_create.go:289] output of [docker network inspect ha-608611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-608611 not found
	
	** /stderr **
	I1009 18:44:38.725077   68004 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:38.743440   68004 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e06100}
	I1009 18:44:38.743492   68004 network_create.go:124] attempt to create docker network ha-608611 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:44:38.743548   68004 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-608611 ha-608611
	I1009 18:44:38.802772   68004 network_create.go:108] docker network ha-608611 192.168.49.0/24 created
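	The network_create lines above pick the first free private /24 and create the bridge network with MTU 1500. A quick check that the created network matches the request (a sketch, assuming the network still exists):
	
	    docker network inspect ha-608611 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	    # expected: 192.168.49.0/24 192.168.49.1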
	I1009 18:44:38.802822   68004 kic.go:121] calculated static IP "192.168.49.2" for the "ha-608611" container
	I1009 18:44:38.802881   68004 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:44:38.820080   68004 cli_runner.go:164] Run: docker volume create ha-608611 --label name.minikube.sigs.k8s.io=ha-608611 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:44:38.840522   68004 oci.go:103] Successfully created a docker volume ha-608611
	I1009 18:44:38.840615   68004 cli_runner.go:164] Run: docker run --rm --name ha-608611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --entrypoint /usr/bin/test -v ha-608611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 18:44:39.244353   68004 oci.go:107] Successfully prepared a docker volume ha-608611
	I1009 18:44:39.244424   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:39.244433   68004 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:44:39.244478   68004 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:44:43.640122   68004 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-608611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.39557595s)
	I1009 18:44:43.640175   68004 kic.go:203] duration metric: took 4.395736393s to extract preloaded images to volume ...
	W1009 18:44:43.640303   68004 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 18:44:43.640358   68004 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 18:44:43.640405   68004 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:44:43.696295   68004 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-608611 --name ha-608611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-608611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-608611 --network ha-608611 --ip 192.168.49.2 --volume ha-608611:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
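	The `docker run` above publishes ports 22, 2376, 5000, 8443 and 32443 with `--publish=127.0.0.1::PORT`, i.e. the host port is left blank so Docker assigns an ephemeral one. To list what was actually bound (a sketch against the running container):
	
	    docker port ha-608611
	    # e.g. 22/tcp -> 127.0.0.1:32783
	    #      8443/tcp -> 127.0.0.1:32786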
	I1009 18:44:43.979679   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Running}}
	I1009 18:44:43.998229   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.017435   68004 cli_runner.go:164] Run: docker exec ha-608611 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:44:44.066674   68004 oci.go:144] the created container "ha-608611" has a running status.
	I1009 18:44:44.066704   68004 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa...
	I1009 18:44:44.380025   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 18:44:44.380087   68004 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:44:44.405345   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.425476   68004 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:44:44.425501   68004 kic_runner.go:114] Args: [docker exec --privileged ha-608611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:44:44.469260   68004 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:44:44.488635   68004 machine.go:93] provisionDockerMachine start ...
	I1009 18:44:44.488729   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.507225   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.507570   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.507596   68004 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:44:44.655038   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.655067   68004 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:44:44.655128   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.673982   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.674208   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.674222   68004 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:44:44.830321   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:44:44.830415   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:44.848252   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:44.848464   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:44.848481   68004 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:44:44.995953   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:44:44.995980   68004 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:44:44.995996   68004 ubuntu.go:190] setting up certificates
	I1009 18:44:44.996004   68004 provision.go:84] configureAuth start
	I1009 18:44:44.996061   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.014319   68004 provision.go:143] copyHostCerts
	I1009 18:44:45.014359   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014401   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:44:45.014411   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:44:45.014491   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:44:45.014585   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014614   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:44:45.014624   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:44:45.014668   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:44:45.014744   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014769   68004 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:44:45.014773   68004 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:44:45.014812   68004 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:44:45.014890   68004 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:44:45.062086   68004 provision.go:177] copyRemoteCerts
	I1009 18:44:45.062191   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:44:45.062224   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.079568   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.182503   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:44:45.182590   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:44:45.201898   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:44:45.201952   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:44:45.219004   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:44:45.219061   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:44:45.236354   68004 provision.go:87] duration metric: took 240.321663ms to configureAuth
	I1009 18:44:45.236386   68004 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:44:45.236591   68004 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:44:45.236715   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.255084   68004 main.go:141] libmachine: Using SSH client type: native
	I1009 18:44:45.255329   68004 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1009 18:44:45.255352   68004 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:44:45.508555   68004 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:44:45.508584   68004 machine.go:96] duration metric: took 1.01992839s to provisionDockerMachine
	I1009 18:44:45.508595   68004 client.go:171] duration metric: took 6.817674141s to LocalClient.Create
	I1009 18:44:45.508615   68004 start.go:167] duration metric: took 6.817737923s to libmachine.API.Create "ha-608611"
	I1009 18:44:45.508627   68004 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:44:45.508641   68004 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:44:45.508698   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:44:45.508733   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.526223   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.630313   68004 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:44:45.633862   68004 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:44:45.633886   68004 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:44:45.633896   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:44:45.633937   68004 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:44:45.634010   68004 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:44:45.634020   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:44:45.634128   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:44:45.641735   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:45.661588   68004 start.go:296] duration metric: took 152.943683ms for postStartSetup
	I1009 18:44:45.661893   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.680048   68004 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:44:45.680316   68004 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:44:45.680352   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.696877   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.796243   68004 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:44:45.800700   68004 start.go:128] duration metric: took 7.112375109s to createHost
	I1009 18:44:45.800729   68004 start.go:83] releasing machines lock for "ha-608611", held for 7.112518345s
	I1009 18:44:45.800791   68004 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:44:45.818595   68004 ssh_runner.go:195] Run: cat /version.json
	I1009 18:44:45.818630   68004 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:44:45.818641   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.818688   68004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:44:45.836603   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.836837   68004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:44:45.989177   68004 ssh_runner.go:195] Run: systemctl --version
	I1009 18:44:45.995896   68004 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:44:46.030619   68004 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:44:46.035429   68004 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:44:46.035494   68004 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:44:46.061922   68004 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:44:46.061944   68004 start.go:495] detecting cgroup driver to use...
	I1009 18:44:46.061975   68004 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:44:46.062026   68004 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:44:46.077423   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:44:46.089316   68004 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:44:46.089367   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:44:46.105696   68004 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:44:46.122777   68004 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:44:46.202639   68004 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:44:46.294647   68004 docker.go:234] disabling docker service ...
	I1009 18:44:46.294704   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:44:46.312549   68004 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:44:46.324800   68004 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:44:46.403433   68004 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:44:46.481222   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:44:46.493645   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:44:46.507931   68004 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:44:46.507979   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.518504   68004 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:44:46.518561   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.527328   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.535888   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.544437   68004 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:44:46.552112   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.560275   68004 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:44:46.573155   68004 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
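	The four sed edits above all converge on the same drop-in: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A spot-check sketch, assuming the stock 02-crio.conf layout inside the node:
	
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",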
	I1009 18:44:46.581642   68004 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:44:46.588485   68004 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:44:46.595486   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:46.674187   68004 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:44:46.778236   68004 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:44:46.778294   68004 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:44:46.782264   68004 start.go:563] Will wait 60s for crictl version
	I1009 18:44:46.782319   68004 ssh_runner.go:195] Run: which crictl
	I1009 18:44:46.785887   68004 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:44:46.809717   68004 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:44:46.809792   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.837446   68004 ssh_runner.go:195] Run: crio --version
	I1009 18:44:46.867516   68004 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:44:46.869002   68004 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:44:46.886298   68004 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:44:46.890354   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:44:46.901206   68004 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:44:46.901331   68004 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:44:46.901390   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.933183   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.933203   68004 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:44:46.933255   68004 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:44:46.959025   68004 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:44:46.959053   68004 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:44:46.959062   68004 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:44:46.959174   68004 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
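	The kubelet unit override above pins --node-ip, the bootstrap kubeconfig, and --config=/var/lib/kubelet/config.yaml. A sketch for inspecting the unit as systemd actually renders it once the node is up (a hypothetical check, not part of this test run; the drop-in is typically 10-kubeadm.conf):
	
	    out/minikube-linux-amd64 -p ha-608611 ssh -- systemctl cat kubelet
	    # shows the base unit plus the drop-in containing the ExecStart line above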
	I1009 18:44:46.959248   68004 ssh_runner.go:195] Run: crio config
	I1009 18:44:47.005223   68004 cni.go:84] Creating CNI manager for ""
	I1009 18:44:47.005245   68004 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:44:47.005269   68004 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:44:47.005302   68004 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:44:47.005420   68004 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
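	The kubeadm.yaml above bundles four API objects in one multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration; kubeadm init --config consumes them all together. A quick structural sanity check can be sketched in Go with gopkg.in/yaml.v3 (the local file path is an assumption):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Sketch: walk the multi-document kubeadm.yaml and print each
// document's apiVersion and kind. Unknown fields are ignored.
func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}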
	
	I1009 18:44:47.005441   68004 kube-vip.go:115] generating kube-vip config ...
	I1009 18:44:47.005483   68004 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1009 18:44:47.017646   68004 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
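	Because `sudo sh -c "lsmod | grep ip_vs"` exits non-zero, minikube skips IPVS-based control-plane load-balancing and falls back to kube-vip's ARP/leader-election mode, which is what the generated config below enables (vip_arp, vip_leaderelection). The same module check can be done without shelling out by reading /proc/modules, the file lsmod formats; a sketch:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Sketch: detect whether the ip_vs kernel module is loaded by scanning
// /proc/modules. Support compiled into the kernel (non-module) would
// not appear here, so treat a miss as "probably absent".
func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			fmt.Println("ip_vs loaded:", s.Text())
			return
		}
	}
	fmt.Println("ip_vs not loaded")
}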
	I1009 18:44:47.017751   68004 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
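	This static pod runs kube-vip in ARP mode: whichever control-plane node wins the plndr-cp-lock lease claims 192.168.49.254 on eth0 and answers ARP for it, so port 8443 on the VIP only becomes reachable once a leader exists. A minimal reachability probe, sketched in Go against the address and port from the config above:

package main

import (
	"fmt"
	"net"
	"time"
)

// Sketch: check whether the kube-vip virtual IP answers on the API
// server port. This fails until a kube-vip leader has claimed the VIP.
func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}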
	I1009 18:44:47.017813   68004 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:44:47.025763   68004 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:44:47.025815   68004 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 18:44:47.033769   68004 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:44:47.046390   68004 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:44:47.062352   68004 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:44:47.075248   68004 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1009 18:44:47.090154   68004 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1009 18:44:47.093985   68004 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
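	The bash one-liner above is an idempotent hosts-file update: filter out any existing control-plane.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. Roughly the same logic in Go (paths hard-coded for illustration; unlike the shell version, this sketch writes in place rather than via a temp file):

package main

import (
	"os"
	"strings"
)

// Sketch: replace-or-append a hosts entry, mirroring the shell
// one-liner in the log above. Not atomic as written.
func main() {
	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}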
	I1009 18:44:47.104234   68004 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:44:47.185443   68004 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:44:47.207477   68004 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:44:47.207503   68004 certs.go:195] generating shared ca certs ...
	I1009 18:44:47.207525   68004 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.207676   68004 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:44:47.207726   68004 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:44:47.207736   68004 certs.go:257] generating profile certs ...
	I1009 18:44:47.207784   68004 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:44:47.207802   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt with IP's: []
	I1009 18:44:47.296415   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt ...
	I1009 18:44:47.296444   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt: {Name:mka7495c49ff81b322387640c5f8be05bb8b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296615   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key ...
	I1009 18:44:47.296627   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key: {Name:mk151a9783426d352762013576861912ee213cd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.296698   68004 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3
	I1009 18:44:47.296712   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1009 18:44:47.614912   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 ...
	I1009 18:44:47.614937   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3: {Name:mkf40b70da82ca6969886952002da4a653b30ed7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615095   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 ...
	I1009 18:44:47.615110   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3: {Name:mkd83b705c3cec74b71d7424d9484d8c52a44a8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.615192   68004 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:44:47.615283   68004 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.4ab867b3 -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:44:47.615388   68004 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:44:47.615408   68004 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt with IP's: []
	I1009 18:44:47.855559   68004 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt ...
	I1009 18:44:47.855590   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt: {Name:mkb45be1e91a0e10b00b60bd353288b3ec0a365b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:44:47.855750   68004 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key ...
	I1009 18:44:47.855762   68004 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key: {Name:mk173c05f4fc9659f1f76c6f2e2f3e956fd65bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
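	The certs.go/crypto.go steps above issue profile certificates signed by the shared minikubeCA; note the apiserver cert carries the in-cluster service IP, localhost, the node IP, and the HA VIP as SANs. A self-contained sketch of that kind of issuance with crypto/x509 (serial numbers, key sizes, and lifetimes are illustrative, not minikube's actual values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

// Sketch: create a throwaway CA, then issue a server certificate with
// the same IP SANs the log shows for the apiserver cert.
func main() {
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(
		must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}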
	I1009 18:44:47.855826   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:44:47.855839   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:44:47.855850   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:44:47.855863   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:44:47.855878   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:44:47.855890   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:44:47.855902   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:44:47.855914   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:44:47.855955   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:44:47.855989   68004 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:44:47.855998   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:44:47.856027   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:44:47.856050   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:44:47.856071   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:44:47.856108   68004 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:44:47.856132   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:47.856159   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:44:47.856171   68004 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:44:47.856652   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:44:47.875170   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:44:47.892939   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:44:47.910593   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:44:47.927971   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 18:44:47.945367   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:44:47.962453   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:44:47.979768   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:44:47.996498   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:44:48.015667   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:44:48.032775   68004 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:44:48.049777   68004 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:44:48.062232   68004 ssh_runner.go:195] Run: openssl version
	I1009 18:44:48.068333   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:44:48.076746   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080306   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.080361   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:44:48.114497   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:44:48.123987   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:44:48.134109   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138265   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.138325   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:44:48.173947   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:44:48.182505   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:44:48.190879   68004 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194449   68004 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.194493   68004 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:44:48.227813   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
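	Each cert copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its OpenSSL subject hash plus ".0" (e.g. b5213941.0 for minikubeCA.pem, the hash printed by `openssl x509 -hash -noout`), which is how OpenSSL's hashed-directory lookup finds a CA at verification time. A sketch of the resulting trust check in Go, verifying a leaf against the minikube CA (local file paths are assumptions):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Sketch: verify apiserver.crt against ca.crt, roughly what a client
// trusting the hashed symlink in /etc/ssl/certs ends up doing.
func main() {
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		panic("no CA certs parsed")
	}
	leafPEM, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(leafPEM)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if _, err := cert.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
		panic(err)
	}
	fmt.Println("certificate chains to minikubeCA; expires", cert.NotAfter)
}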
	I1009 18:44:48.236520   68004 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:44:48.239954   68004 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:44:48.240015   68004 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:44:48.240093   68004 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:44:48.240133   68004 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:44:48.266457   68004 cri.go:89] found id: ""
	I1009 18:44:48.266520   68004 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:44:48.274981   68004 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:44:48.282927   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:44:48.282975   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:44:48.290558   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:44:48.290617   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:44:48.290662   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:44:48.297883   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:44:48.297940   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:44:48.305298   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:44:48.312630   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:44:48.312685   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:44:48.320277   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.328028   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:44:48.328075   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:44:48.335714   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:44:48.343631   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:44:48.343682   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
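	Since this is a first start, none of the four kubeconfigs exist; each grep exits with status 2 and minikube falls through to rm -f. On a restart, the same loop would keep any config that already points at https://control-plane.minikube.internal:8443 and delete only stale ones. The loop, sketched in Go:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

// Sketch of the stale-config cleanup seen above: keep a kubeconfig only
// if it already points at the expected control-plane endpoint.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, name := range []string{
		"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf",
	} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(path) // missing or stale: remove (rm -f semantics)
			fmt.Println("removed", path)
			continue
		}
		fmt.Println("kept", path)
	}
}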
	I1009 18:44:48.351389   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:44:48.409985   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:44:48.468687   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:52.176412   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1009 18:48:52.176606   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:48:52.179343   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:48:52.179469   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:48:52.179692   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:48:52.179825   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:48:52.179919   68004 kubeadm.go:318] OS: Linux
	I1009 18:48:52.180033   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:48:52.180167   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:48:52.180261   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:48:52.180339   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:48:52.180423   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:48:52.180506   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:48:52.180585   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:48:52.180650   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:48:52.180730   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:52.180858   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:52.181038   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:52.181129   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:52.183066   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:48:52.183199   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:48:52.183278   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:52.183337   68004 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:52.183388   68004 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:52.183456   68004 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:52.183531   68004 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:52.183609   68004 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:52.183734   68004 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.183814   68004 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:52.183946   68004 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:48:52.184022   68004 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:52.184077   68004 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:52.184120   68004 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 18:48:52.184209   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:52.184289   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:52.184373   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:52.184446   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:52.184545   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:52.184650   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:52.184751   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:52.184845   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:52.187212   68004 out.go:252]   - Booting up control plane ...
	I1009 18:48:52.187314   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:52.187403   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:52.187495   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:52.187618   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:52.187764   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:48:52.187905   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:48:52.188016   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:52.188092   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:48:52.188271   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:52.188367   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:52.188438   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001064091s
	I1009 18:48:52.188532   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:48:52.188631   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:48:52.188753   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:48:52.188835   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:48:52.188944   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	I1009 18:48:52.189053   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	I1009 18:48:52.189176   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	I1009 18:48:52.189186   68004 kubeadm.go:318] 
	I1009 18:48:52.189288   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:48:52.189417   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:48:52.189507   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:48:52.189604   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:48:52.189710   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:48:52.189827   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:48:52.189851   68004 kubeadm.go:318] 
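	kubeadm's wait-control-plane phase polls three endpoints for up to 4m0s each: kube-apiserver's /livez on 192.168.49.2:8443, kube-controller-manager's /healthz on 127.0.0.1:10257, and kube-scheduler's /livez on 127.0.0.1:10259. Here all three stay down for the full window, meaning the static pods never came up under cri-o. The probes are easy to replay from the node; a sketch (TLS verification is skipped because these components serve self-signed certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Sketch: replay kubeadm's control-plane health checks from the node.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for _, url := range []string{
		"https://192.168.49.2:8443/livez",
		"https://127.0.0.1:10257/healthz",
		"https://127.0.0.1:10259/livez",
	} {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println(url, "=>", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(url, "=>", resp.Status)
	}
}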
	W1009 18:48:52.189997   68004 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-608611 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001064091s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.00065849s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000822023s
	[control-plane-check] kube-scheduler is not healthy after 4m0.00103559s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 18:48:52.190074   68004 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 18:48:54.957990   68004 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.767888592s)
	I1009 18:48:54.958062   68004 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:54.971165   68004 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:48:54.971216   68004 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:54.979630   68004 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:54.979649   68004 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:54.979696   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:54.987819   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:54.987884   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:54.995953   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:55.003976   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:55.004081   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:55.011851   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.019991   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:55.020043   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:55.027959   68004 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:55.036070   68004 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:55.036117   68004 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:55.043823   68004 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:48:55.102132   68004 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 18:48:55.161990   68004 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:52:58.820119   68004 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 18:52:58.820247   68004 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 18:52:58.823463   68004 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 18:52:58.823551   68004 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 18:52:58.823686   68004 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:52:58.823770   68004 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 18:52:58.823834   68004 kubeadm.go:318] OS: Linux
	I1009 18:52:58.823882   68004 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 18:52:58.823967   68004 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 18:52:58.824039   68004 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 18:52:58.824112   68004 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 18:52:58.824209   68004 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 18:52:58.824278   68004 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 18:52:58.824339   68004 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 18:52:58.824385   68004 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 18:52:58.824446   68004 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:52:58.824525   68004 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:52:58.824621   68004 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:52:58.824718   68004 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:52:58.828177   68004 out.go:252]   - Generating certificates and keys ...
	I1009 18:52:58.828267   68004 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 18:52:58.828359   68004 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 18:52:58.828476   68004 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 18:52:58.828530   68004 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 18:52:58.828586   68004 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 18:52:58.828629   68004 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 18:52:58.828684   68004 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 18:52:58.828737   68004 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 18:52:58.828800   68004 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 18:52:58.828859   68004 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 18:52:58.828890   68004 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 18:52:58.828973   68004 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:52:58.829058   68004 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:52:58.829168   68004 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:52:58.829228   68004 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:52:58.829307   68004 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:52:58.829375   68004 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:52:58.829446   68004 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:52:58.829507   68004 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:52:58.830918   68004 out.go:252]   - Booting up control plane ...
	I1009 18:52:58.831004   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:52:58.831088   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:52:58.831162   68004 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:52:58.831271   68004 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:52:58.831374   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 18:52:58.831475   68004 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 18:52:58.831547   68004 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:52:58.831602   68004 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 18:52:58.831715   68004 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:52:58.831812   68004 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:52:58.831876   68004 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000946171s
	I1009 18:52:58.831960   68004 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 18:52:58.832028   68004 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1009 18:52:58.832113   68004 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 18:52:58.832207   68004 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 18:52:58.832277   68004 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	I1009 18:52:58.832347   68004 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	I1009 18:52:58.832422   68004 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	I1009 18:52:58.832428   68004 kubeadm.go:318] 
	I1009 18:52:58.832506   68004 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 18:52:58.832579   68004 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 18:52:58.832656   68004 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 18:52:58.832741   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 18:52:58.832805   68004 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 18:52:58.832888   68004 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 18:52:58.832970   68004 kubeadm.go:402] duration metric: took 8m10.592960723s to StartCluster
	I1009 18:52:58.832981   68004 kubeadm.go:318] 
	I1009 18:52:58.833031   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:52:58.833085   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:52:58.861225   68004 cri.go:89] found id: ""
	I1009 18:52:58.861266   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.861281   68004 logs.go:284] No container was found matching "kube-apiserver"
	I1009 18:52:58.861287   68004 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:52:58.861341   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:52:58.888167   68004 cri.go:89] found id: ""
	I1009 18:52:58.888195   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.888205   68004 logs.go:284] No container was found matching "etcd"
	I1009 18:52:58.888212   68004 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:52:58.888287   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:52:58.914349   68004 cri.go:89] found id: ""
	I1009 18:52:58.914374   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.914384   68004 logs.go:284] No container was found matching "coredns"
	I1009 18:52:58.914390   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:52:58.914453   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:52:58.940856   68004 cri.go:89] found id: ""
	I1009 18:52:58.940884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.940892   68004 logs.go:284] No container was found matching "kube-scheduler"
	I1009 18:52:58.940898   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:52:58.940949   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:52:58.967634   68004 cri.go:89] found id: ""
	I1009 18:52:58.967660   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.967668   68004 logs.go:284] No container was found matching "kube-proxy"
	I1009 18:52:58.967675   68004 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:52:58.967737   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:52:58.994857   68004 cri.go:89] found id: ""
	I1009 18:52:58.994884   68004 logs.go:282] 0 containers: []
	W1009 18:52:58.994892   68004 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 18:52:58.994897   68004 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:52:58.994951   68004 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:52:59.022250   68004 cri.go:89] found id: ""
	I1009 18:52:59.022280   68004 logs.go:282] 0 containers: []
	W1009 18:52:59.022296   68004 logs.go:284] No container was found matching "kindnet"
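	Every component query above returns an empty ID list, not even an exited container, so the failure happened before any control-plane container was created; that is why the log next turns to the kubelet, dmesg, and CRI-O journals. The sweep itself is just crictl with a name filter; a Go exec sketch of the same loop:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Sketch: reproduce the per-component container sweep from the log.
func main() {
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name", name).Output()
		if err != nil {
			fmt.Println(name, "=> error:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s => %d containers\n", name, len(ids))
	}
}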
	I1009 18:52:59.022305   68004 logs.go:123] Gathering logs for container status ...
	I1009 18:52:59.022316   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:52:59.050362   68004 logs.go:123] Gathering logs for kubelet ...
	I1009 18:52:59.050466   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 18:52:59.114521   68004 logs.go:123] Gathering logs for dmesg ...
	I1009 18:52:59.114560   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:52:59.126721   68004 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:52:59.126746   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 18:52:59.184497   68004 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 18:52:59.177217    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.177807    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179451    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.179888    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:52:59.181458    2549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 18:52:59.184526   68004 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:52:59.184536   68004 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1009 18:52:59.243650   68004 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000946171s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000360275s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000521648s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000677169s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 18:52:59.243716   68004 out.go:285] * 
	W1009 18:52:59.243784   68004 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr omitted; identical to the output shown above]
	
	W1009 18:52:59.243799   68004 out.go:285] * 
	W1009 18:52:59.245479   68004 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 18:52:59.249165   68004 out.go:203] 
	W1009 18:52:59.250590   68004 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[kubeadm init stdout/stderr omitted; identical to the output shown above]
	
	W1009 18:52:59.250620   68004 out.go:285] * 
	I1009 18:52:59.252112   68004 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.464882354Z" level=info msg="createCtr: removing container a4e348fe8c0634be7997fa851a9d7874ff42e4fcc6b6ded7430d4d440fa54d76" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.464921474Z" level=info msg="createCtr: deleting container a4e348fe8c0634be7997fa851a9d7874ff42e4fcc6b6ded7430d4d440fa54d76 from storage" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:43 ha-608611 crio[779]: time="2025-10-09T18:55:43.466974765Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=c730575e-275c-47fb-891e-f453e9c771f0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.441682973Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=a5e334c8-b59c-4949-8280-bc0330f89259 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.442634699Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=49f09a5b-0988-40a6-8dbe-9b4e726fc2f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.443494892Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.443724316Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.448307137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.448750659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.45910449Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.460482171Z" level=info msg="createCtr: deleting container ID c9001fcb027033e1331eceba04cac65f617fa86b27c007794e067cfc50667267 from idIndex" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.460524339Z" level=info msg="createCtr: removing container c9001fcb027033e1331eceba04cac65f617fa86b27c007794e067cfc50667267" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.46056105Z" level=info msg="createCtr: deleting container c9001fcb027033e1331eceba04cac65f617fa86b27c007794e067cfc50667267 from storage" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:44 ha-608611 crio[779]: time="2025-10-09T18:55:44.462856948Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=4b96f54f-2e05-441f-b47e-c327f8279d72 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.441823652Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3d8d1ad8-98ac-4d1e-8a62-d0ad6bcb87cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.442774075Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f416138d-dafc-4e45-bf78-fcdb32f294bc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.443838294Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=3b3b1eb1-6503-41f9-9067-10b66b18c1d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.444076881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.44760933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.448010275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.46212325Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3b3b1eb1-6503-41f9-9067-10b66b18c1d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.463589803Z" level=info msg="createCtr: deleting container ID a4243b9fc2ff5a3b52364f432e079c1219c072ac4aec3ec9d7bd92feb98febfd from idIndex" id=3b3b1eb1-6503-41f9-9067-10b66b18c1d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.463626666Z" level=info msg="createCtr: removing container a4243b9fc2ff5a3b52364f432e079c1219c072ac4aec3ec9d7bd92feb98febfd" id=3b3b1eb1-6503-41f9-9067-10b66b18c1d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.463666523Z" level=info msg="createCtr: deleting container a4243b9fc2ff5a3b52364f432e079c1219c072ac4aec3ec9d7bd92feb98febfd from storage" id=3b3b1eb1-6503-41f9-9067-10b66b18c1d5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 18:55:45 ha-608611 crio[779]: time="2025-10-09T18:55:45.4659086Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=3b3b1eb1-6503-41f9-9067-10b66b18c1d5 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 18:55:47.089946    4780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:47.090478    4780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:47.092106    4780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:47.092589    4780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 18:55:47.094250    4780 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 18:55:47 up  1:38,  0 user,  load average: 0.27, 0.15, 0.11
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.441609    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.467340    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:55:43 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:43 ha-608611 kubelet[1930]:  > podSandboxID="3ed86e3854bad44d01adb07f49466fff61fdf9dd10f223587d539b2547828b70"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.467459    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:55:43 ha-608611 kubelet[1930]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:43 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:55:43 ha-608611 kubelet[1930]: E1009 18:55:43.467501    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.441244    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.463177    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:55:44 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:44 ha-608611 kubelet[1930]:  > podSandboxID="2ef2b90afa617b399f6036f17dc5f1152d378da5043adff2fc3afde192bc8693"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.463304    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:55:44 ha-608611 kubelet[1930]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:44 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:55:44 ha-608611 kubelet[1930]: E1009 18:55:44.463348    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.441359    1930 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.466231    1930 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 18:55:45 ha-608611 kubelet[1930]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:45 ha-608611 kubelet[1930]:  > podSandboxID="85e631b34b7cd8e30736ecbe7d81581bf5cedb0c5abd8815458e28a54592f51e"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.466335    1930 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 18:55:45 ha-608611 kubelet[1930]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 18:55:45 ha-608611 kubelet[1930]:  > logger="UnhandledError"
	Oct 09 18:55:45 ha-608611 kubelet[1930]: E1009 18:55:45.466365    1930 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 18:55:46 ha-608611 kubelet[1930]: E1009 18:55:46.159260    1930 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce72dd5388d27  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,LastTimestamp:2025-10-09 18:48:58.431819047 +0000 UTC m=+0.618197321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	

                                                
                                                
-- /stdout --
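The repeated CRI-O line in the logs above, "Container creation error: cannot open sd-bus: No such file or directory", is the kind of failure typically seen when a runtime is configured for the systemd cgroup manager but no systemd bus socket is reachable from where it runs. A minimal triage sketch, assuming the "ha-608611" node container from this run is still up and that CRI-O keeps its configuration under /etc/crio (both assumptions, not verified by this report):

	# Which cgroup manager is CRI-O configured with? "systemd" would explain
	# the sd-bus error when no systemd bus socket is reachable in the node.
	docker exec ha-608611 grep -Rn cgroup_manager /etc/crio/

	# Do the bus sockets sd-bus would try to open actually exist in the node?
	docker exec ha-608611 ls -l /run/systemd/private /run/dbus/system_bus_socket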
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 6 (297.032507ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 18:55:47.469859   80973 status.go:458] kubeconfig endpoint: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
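As the "stale minikube-vm" warning in the status output suggests, the kubeconfig entry can be repointed once the cluster is reachable again; a sketch using the binary and profile from this run (assuming the profile still exists on the host):

	out/minikube-linux-amd64 update-context -p ha-608611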
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.58s)
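For local reproduction, the serial subtests in this group share cluster state with their parent test, so they are normally re-run as a group; a sketch from a minikube source checkout (the package path and timeout are assumptions inferred from the ha_test.go/helpers_test.go file names in this report, and the out/minikube-linux-amd64 binary must already be built):

	go test ./test/integration -run 'TestMultiControlPlane' -timeout 60m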

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-608611 stop --alsologtostderr -v 5: (1.210685594s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 start --wait true --alsologtostderr -v 5
E1009 19:00:34.610187   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.588828128s)

                                                
                                                
-- stdout --
	* [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 18:55:48.782369   81326 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:48.782604   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782612   81326 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:48.782616   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782782   81326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:48.783226   81326 out.go:368] Setting JSON to false
	I1009 18:55:48.784053   81326 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5897,"bootTime":1760030252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:55:48.784156   81326 start.go:141] virtualization: kvm guest
	I1009 18:55:48.786563   81326 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:55:48.788077   81326 notify.go:220] Checking for updates...
	I1009 18:55:48.788126   81326 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:55:48.789665   81326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:55:48.791095   81326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:48.792613   81326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:55:48.794226   81326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:55:48.795794   81326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:55:48.797638   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:48.797748   81326 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:55:48.820855   81326 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:55:48.820923   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.876094   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.866734643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.876204   81326 docker.go:318] overlay module found
	I1009 18:55:48.877913   81326 out.go:179] * Using the docker driver based on existing profile
	I1009 18:55:48.879222   81326 start.go:305] selected driver: docker
	I1009 18:55:48.879244   81326 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.879315   81326 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:55:48.879420   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.933369   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.924148795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.933987   81326 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:55:48.934014   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:48.934075   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:48.934183   81326 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.936388   81326 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:55:48.937951   81326 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:55:48.939231   81326 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:55:48.940352   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:48.940388   81326 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:55:48.940398   81326 cache.go:64] Caching tarball of preloaded images
	I1009 18:55:48.940435   81326 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:55:48.940519   81326 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:55:48.940534   81326 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:55:48.940631   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:48.960098   81326 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:55:48.960121   81326 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:55:48.960153   81326 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:55:48.960177   81326 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:55:48.960231   81326 start.go:364] duration metric: took 36.84µs to acquireMachinesLock for "ha-608611"
	I1009 18:55:48.960251   81326 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:55:48.960256   81326 fix.go:54] fixHost starting: 
	I1009 18:55:48.960457   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:48.977497   81326 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 18:55:48.977523   81326 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:55:48.979512   81326 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 18:55:48.979585   81326 cli_runner.go:164] Run: docker start ha-608611
	I1009 18:55:49.217604   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:49.237615   81326 kic.go:430] container "ha-608611" state is running.
	I1009 18:55:49.238028   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:49.257124   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:49.257381   81326 machine.go:93] provisionDockerMachine start ...
	I1009 18:55:49.257452   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:49.276711   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:49.276957   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:49.276972   81326 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:55:49.277652   81326 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45654->127.0.0.1:32788: read: connection reset by peer
	I1009 18:55:52.425271   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
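The dial failure above is expected: the container was just restarted and sshd inside it is not yet accepting connections, so libmachine simply retries until the handshake at 18:55:52 succeeds. A minimal sketch of that wait-for-ssh-port pattern, using only the Go standard library (the address and timeouts are illustrative, not minikube's actual values):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForTCP polls addr until a TCP connection succeeds or the deadline
    // passes. sshd resets connections while it is still starting, so early
    // failures are expected and simply retried.
    func waitForTCP(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("ssh port never came up: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForTCP("127.0.0.1:32788", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }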
	I1009 18:55:52.425302   81326 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:55:52.425356   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.443305   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.443509   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.443521   81326 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:55:52.597559   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.597633   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.615251   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.615459   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.615476   81326 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:55:52.760759   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:55:52.760787   81326 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:55:52.760833   81326 ubuntu.go:190] setting up certificates
	I1009 18:55:52.760848   81326 provision.go:84] configureAuth start
	I1009 18:55:52.760892   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:52.778450   81326 provision.go:143] copyHostCerts
	I1009 18:55:52.778486   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778529   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:55:52.778546   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778622   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:55:52.778743   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778772   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:55:52.778782   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778825   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:55:52.778905   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778928   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:55:52.778938   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778979   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:55:52.779124   81326 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
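The regenerated server certificate carries SANs for the service VIP (10.96.0.1), loopback, and the node IP, so the same cert validates no matter which address a client dials. A self-contained sketch of issuing a cert with IP SANs via crypto/x509 (self-signed here for brevity; minikube actually signs with its local CA — the SAN values below are taken from the log line above):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-608611"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump below
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-608611", "localhost", "minikube"},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("192.168.49.2"),
            },
        }
        // Self-signed: the template doubles as its own issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }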
	I1009 18:55:52.921150   81326 provision.go:177] copyRemoteCerts
	I1009 18:55:52.921251   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:55:52.921302   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.938746   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.041424   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:55:53.041487   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:55:53.059403   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:55:53.059465   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:55:53.077545   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:55:53.077599   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:55:53.095069   81326 provision.go:87] duration metric: took 334.207036ms to configureAuth
	I1009 18:55:53.095112   81326 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:55:53.095285   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:53.095376   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.113012   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:53.113249   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:53.113266   81326 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:55:53.371650   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:55:53.371676   81326 machine.go:96] duration metric: took 4.114278074s to provisionDockerMachine
	I1009 18:55:53.371688   81326 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:55:53.371701   81326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:55:53.371771   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:55:53.371842   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.390223   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.493994   81326 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:55:53.497842   81326 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:55:53.497867   81326 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:55:53.497877   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:55:53.497926   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:55:53.498003   81326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:55:53.498014   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:55:53.498111   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:55:53.506094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:53.524346   81326 start.go:296] duration metric: took 152.640721ms for postStartSetup
	I1009 18:55:53.524419   81326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:53.524480   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.542600   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.642517   81326 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:55:53.646989   81326 fix.go:56] duration metric: took 4.686726649s for fixHost
	I1009 18:55:53.647050   81326 start.go:83] releasing machines lock for "ha-608611", held for 4.686806047s
	I1009 18:55:53.647103   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:53.665515   81326 ssh_runner.go:195] Run: cat /version.json
	I1009 18:55:53.665578   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.665620   81326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:55:53.665678   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.684362   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.684684   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.836250   81326 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:53.842642   81326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:55:53.877786   81326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:55:53.882350   81326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:55:53.882415   81326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:55:53.890015   81326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:55:53.890039   81326 start.go:495] detecting cgroup driver to use...
	I1009 18:55:53.890072   81326 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:55:53.890126   81326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:55:53.903830   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:55:53.915636   81326 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:55:53.915680   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:55:53.929373   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:55:53.941718   81326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:55:54.017230   81326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:55:54.097019   81326 docker.go:234] disabling docker service ...
	I1009 18:55:54.097119   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:55:54.110968   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:55:54.123470   81326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:55:54.198047   81326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:55:54.273477   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:55:54.285686   81326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:55:54.299501   81326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:55:54.299553   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.307932   81326 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:55:54.307990   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.316516   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.324850   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.333127   81326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:55:54.340857   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.349439   81326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.357872   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.367094   81326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:55:54.374845   81326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:55:54.382734   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.461355   81326 ssh_runner.go:195] Run: sudo systemctl restart crio
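Each sed invocation above rewrites a single key in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted to pick the changes up. A minimal Go equivalent of the pause_image rewrite (needs root and assumes the file exists; error handling kept terse):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }

As in the log, a change like this only takes effect after systemctl daemon-reload and a crio restart.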
	I1009 18:55:54.565572   81326 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:55:54.565624   81326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:55:54.571180   81326 start.go:563] Will wait 60s for crictl version
	I1009 18:55:54.571234   81326 ssh_runner.go:195] Run: which crictl
	I1009 18:55:54.574912   81326 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:55:54.598972   81326 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:55:54.599070   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.626916   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.656626   81326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:55:54.658243   81326 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:55:54.675913   81326 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:55:54.680110   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:55:54.690492   81326 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:55:54.690604   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:54.690644   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.722701   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.722720   81326 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:55:54.722761   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.747850   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.747875   81326 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:55:54.747882   81326 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:55:54.748003   81326 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
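minikube renders this kubelet drop-in from a template before scp'ing it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (visible a few lines below). A toy rendering of the same shape with text/template — flags trimmed to the interesting ones, and this is not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const unit = "[Unit]\nWants={{.Runtime}}.service\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        err := t.Execute(os.Stdout, map[string]string{
            "Runtime": "crio",
            "Version": "v1.34.1",
            "Node":    "ha-608611",
            "IP":      "192.168.49.2",
        })
        if err != nil {
            panic(err)
        }
    }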
	I1009 18:55:54.748077   81326 ssh_runner.go:195] Run: crio config
	I1009 18:55:54.792222   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:54.792240   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:54.792253   81326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:55:54.792274   81326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:55:54.792387   81326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:55:54.792445   81326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:55:54.800546   81326 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:55:54.800612   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:55:54.808306   81326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:55:54.820571   81326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:55:54.832686   81326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:55:54.845124   81326 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:55:54.848713   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:55:54.858608   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.936048   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:54.960660   81326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:55:54.960682   81326 certs.go:195] generating shared ca certs ...
	I1009 18:55:54.960703   81326 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:54.960866   81326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:55:54.960929   81326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:55:54.960943   81326 certs.go:257] generating profile certs ...
	I1009 18:55:54.961058   81326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:55:54.961104   81326 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 18:55:54.961152   81326 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:55:55.543578   81326 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a ...
	I1009 18:55:55.543608   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a: {Name:mk997984d16894bde965cc8b9fac1d81fe6f4952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543774   81326 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a ...
	I1009 18:55:55.543787   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a: {Name:mk0466ac68a27af88f893685594376a4479a0b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543856   81326 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:55:55.543984   81326 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:55:55.544117   81326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:55:55.544131   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:55:55.544165   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:55:55.544184   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:55:55.544201   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:55:55.544214   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:55:55.544227   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:55:55.544240   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:55:55.544255   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:55:55.544302   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:55:55.544330   81326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:55:55.544341   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:55:55.544368   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:55:55.544389   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:55:55.544410   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:55:55.544447   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:55.544473   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.544487   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.544500   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.545009   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:55:55.563316   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:55:55.580320   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:55:55.597589   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:55:55.614398   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 18:55:55.631965   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:55:55.648792   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:55:55.666094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:55:55.683111   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:55:55.700010   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:55:55.717340   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:55:55.734654   81326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:55:55.747411   81326 ssh_runner.go:195] Run: openssl version
	I1009 18:55:55.753470   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:55:55.761715   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765434   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765492   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.798918   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:55:55.807085   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:55:55.815823   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819621   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819677   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.854342   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:55:55.862610   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:55:55.870964   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874789   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874839   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.909615   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 18:55:55.918204   81326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:55:55.922181   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:55:55.956689   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:55:55.991888   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:55:56.025768   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:55:56.066085   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:55:56.107192   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
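Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours; that is how the restart path decides whether the control-plane certs need regenerating. The same check in Go (the path is one of those from the log; requires read access to the cert):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM data found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same test as: openssl x509 -checkend 86400
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate valid beyond 24h")
    }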
	I1009 18:55:56.142373   81326 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:56.142453   81326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:55:56.142506   81326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:55:56.169309   81326 cri.go:89] found id: ""
	I1009 18:55:56.169373   81326 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:55:56.177273   81326 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:55:56.177294   81326 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:55:56.177352   81326 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:55:56.184818   81326 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:56.185183   81326 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.185297   81326 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 18:55:56.185607   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.186078   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.186554   81326 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:55:56.186572   81326 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:55:56.186576   81326 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:55:56.186579   81326 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:55:56.186582   81326 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:55:56.186644   81326 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:55:56.186913   81326 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:55:56.194885   81326 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:55:56.194918   81326 kubeadm.go:601] duration metric: took 17.618968ms to restartPrimaryControlPlane
	I1009 18:55:56.194926   81326 kubeadm.go:402] duration metric: took 52.565569ms to StartCluster
	I1009 18:55:56.194954   81326 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195014   81326 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.195534   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195769   81326 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:55:56.195852   81326 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:55:56.195922   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:56.195932   81326 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 18:55:56.195965   81326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 18:55:56.195925   81326 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 18:55:56.195992   81326 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 18:55:56.196019   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.196264   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.196400   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.199213   81326 out.go:179] * Verifying Kubernetes components...
	I1009 18:55:56.200413   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:56.216177   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.216605   81326 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 18:55:56.216648   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.217159   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.217419   81326 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:55:56.219196   81326 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.219223   81326 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:55:56.219282   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.243943   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:56.245925   81326 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:55:56.245944   81326 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:55:56.245984   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.263872   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:56.303542   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:56.316831   81326 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 18:55:56.352305   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.372260   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.412054   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.412090   81326 retry.go:31] will retry after 210.547469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:55:56.427954   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.427986   81326 retry.go:31] will retry after 365.761186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.623265   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:56.675568   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.675605   81326 retry.go:31] will retry after 331.492885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.794903   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.846158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.846190   81326 retry.go:31] will retry after 366.903412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.007285   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.058254   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.058285   81326 retry.go:31] will retry after 440.442086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.213614   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.266588   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.266622   81326 retry.go:31] will retry after 403.844371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.499702   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.552130   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.552176   81326 retry.go:31] will retry after 1.153605517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.671430   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.724158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.724189   81326 retry.go:31] will retry after 1.186791372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:55:58.317829   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:55:58.706293   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:58.758710   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.758743   81326 retry.go:31] will retry after 1.743017897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.911763   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:58.963731   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.963764   81326 retry.go:31] will retry after 777.451228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.742307   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:59.794404   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.794436   81326 retry.go:31] will retry after 1.290318475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:00.318311   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:00.502629   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:00.555745   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:00.555777   81326 retry.go:31] will retry after 2.524197607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.084941   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:01.136443   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.136470   81326 retry.go:31] will retry after 1.577041718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.713959   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:02.768944   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.769029   81326 retry.go:31] will retry after 2.739822337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:02.817505   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:03.080936   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:03.135285   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:03.135312   81326 retry.go:31] will retry after 2.274306578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:04.818421   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:05.409777   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:05.464614   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.464653   81326 retry.go:31] will retry after 2.562562636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.509838   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:05.563089   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.563116   81326 retry.go:31] will retry after 7.257063106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:07.317778   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:08.028172   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:08.085551   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:08.085584   81326 retry.go:31] will retry after 5.304285212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:09.817756   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:12.317749   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:12.820933   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:12.874853   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:12.874882   81326 retry.go:31] will retry after 14.146267666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.390058   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:13.445661   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.445690   81326 retry.go:31] will retry after 12.009663375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:14.317787   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:16.817710   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:19.317672   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:21.817409   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:23.817676   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:25.455892   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:25.508571   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:25.508602   81326 retry.go:31] will retry after 16.328819921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:26.317511   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:27.021826   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:27.074125   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:27.074173   81326 retry.go:31] will retry after 16.507388606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:28.317794   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:30.318017   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:32.318056   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:34.318298   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:36.817418   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:38.817580   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:41.317407   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:41.838597   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:41.891282   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:41.891335   81326 retry.go:31] will retry after 22.626101475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:43.317591   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:43.581928   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:43.635774   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:43.635813   81326 retry.go:31] will retry after 29.761890826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:45.317977   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:47.817468   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:49.817818   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:51.818094   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:54.317753   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:56.318437   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:58.817821   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:01.317707   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:03.318244   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:04.517790   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:04.570824   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:04.570858   81326 retry.go:31] will retry after 21.453197357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:05.817503   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:07.817615   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:10.317488   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:12.318329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:13.398664   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:13.451327   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:13.451363   81326 retry.go:31] will retry after 18.539744202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:14.817577   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:16.818008   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:19.317830   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:21.817431   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:23.817855   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:26.024797   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:26.087554   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:26.087679   81326 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 18:57:26.317736   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:28.817913   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:31.317538   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:31.991746   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:32.045057   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:32.045194   81326 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:57:32.046928   81326 out.go:179] * Enabled addons: 
	I1009 18:57:32.048585   81326 addons.go:514] duration metric: took 1m35.852737584s for enable addons: enabled=[]
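Every apply failure above is the same symptom: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on localhost:8443 at all, so even the suggested --validate=false would only skip the schema fetch while the apply itself still could not reach the server. The retry.go lines show minikube re-running each apply after jittered, roughly doubling delays until it gives up and reports the addon as failed ("Enabling '...' returned an error"). A minimal Go sketch of that backoff loop follows; the function names, starting delay, and attempt cap are illustrative assumptions, not minikube's actual retry.go API:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyManifest shells out the same way the ssh_runner lines show:
    // sudo KUBECONFIG=... kubectl apply --force -f <manifest>.
    func applyManifest(manifest string) error {
    	return exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"apply", "--force", "-f", manifest).Run()
    }

    // applyWithBackoff retries a failed apply after jittered, roughly
    // doubling delays, matching the growing "will retry after ..."
    // intervals in the log, and gives up after maxAttempts.
    func applyWithBackoff(manifest string, maxAttempts int) error {
    	delay := 300 * time.Millisecond // illustrative starting point
    	var err error
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = applyManifest(manifest); err == nil {
    			return nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("apply failed, will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return fmt.Errorf("enabling addon failed after %d attempts: %w", maxAttempts, err)
    }

    func main() {
    	if err := applyWithBackoff("/etc/kubernetes/addons/storage-provisioner.yaml", 10); err != nil {
    		fmt.Println(err)
    	}
    }

With the apiserver down, every attempt fails identically, which is why the log ends with "enabled=[]" after 1m35s of retries.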
	W1009 18:57:33.318217   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:35.817737   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:38.317513   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:40.317684   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:42.317825   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:44.817551   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:46.818116   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:49.317639   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:51.317685   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:53.318174   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:55.817487   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:57.817583   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:00.317490   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:02.817473   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:04.817686   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:06.817818   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:08.817877   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:11.317640   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:13.817637   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:16.317565   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:18.817617   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:21.317386   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:23.817678   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:26.317892   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:28.817872   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:31.317398   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:33.817489   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:36.317642   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:38.817932   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:41.317687   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:43.817793   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:46.318079   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:48.318161   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:50.818473   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:53.317422   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:55.317574   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:57.817543   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:00.317518   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:02.818366   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:05.317450   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:07.817392   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:09.817445   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:11.818236   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:13.818364   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:16.317528   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:18.318349   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:20.817329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same "node_ready" connection-refused warning repeats every 2-2.5s; 65 occurrences from 18:59:22 through 19:01:53 elided ...]
	W1009 19:01:55.818044   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:01:56.318011   81326 node_ready.go:38] duration metric: took 6m0.001141049s for node "ha-608611" to be "Ready" ...
	I1009 19:01:56.320179   81326 out.go:203] 
	W1009 19:01:56.321631   81326 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:01:56.321647   81326 out.go:285] * 
	W1009 19:01:56.323308   81326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:01:56.324645   81326 out.go:203] 

                                                
                                                
** /stderr **
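The stderr transcript above reduces to a single failure mode: every node_ready poll of https://192.168.49.2:8443 is refused, so once the 6m0s wait budget expires minikube aborts with GUEST_START. The probe can be reproduced by hand from the host by resolving the 127.0.0.1 port Docker published for the container's 8443/tcp (a minimal sketch; the 32791 mapping is taken from the docker inspect output below, and any HTTP answer at all, even 401/403, would rule out "connection refused"):

    # Which host port Docker published for the apiserver's 8443/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-608611
    # Probe it (32791 in this run); "connection refused" means nothing is listening inside the container
    curl -k https://127.0.0.1:32791/healthz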
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-608611 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 81525,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:55:49.004659898Z",
	            "FinishedAt": "2025-10-09T18:55:47.866160923Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd78a6800d0ca67ea1af19252b5bd24a3e3fc828387489071234de54472900f3",
	            "SandboxKey": "/var/run/docker/netns/bd78a6800d0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:63:77:d6:c6:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "8c9e2b0ece853c05aed38cc16cf83246ef35859c6d45bb06281e9e29114c856e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
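Individual fields can be pulled out of an inspect dump like this with the -f Go-template flag rather than reading the whole JSON; the template paths below match the structure shown above, and the Ports lookup is the same expression minikube itself runs later in these logs to find the SSH port:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' ha-608611
    docker inspect -f '{{(index .NetworkSettings.Networks "ha-608611").IPAddress}}' ha-608611
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611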
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (294.246176ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
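The exit status 2 alongside a stdout of "Running" is self-consistent: minikube documents the status exit code as a bitwise encoding of host, cluster, and Kubernetes state, so a running host with the cluster down yields a non-zero exit even though {{.Host}} prints Running. A sketch querying the other fields of the same status struct (field names per the minikube status template, assumed unchanged in v1.37.0):

    out/minikube-linux-amd64 status -p ha-608611 --format '{{.Host}} {{.APIServer}} {{.Kubelet}}'; echo "exit=$?"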
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- rollout status deployment/busybox                      │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                          │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:55:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:55:48.782369   81326 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:48.782604   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782612   81326 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:48.782616   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782782   81326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:48.783226   81326 out.go:368] Setting JSON to false
	I1009 18:55:48.784053   81326 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5897,"bootTime":1760030252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:55:48.784156   81326 start.go:141] virtualization: kvm guest
	I1009 18:55:48.786563   81326 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:55:48.788077   81326 notify.go:220] Checking for updates...
	I1009 18:55:48.788126   81326 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:55:48.789665   81326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:55:48.791095   81326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:48.792613   81326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:55:48.794226   81326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:55:48.795794   81326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:55:48.797638   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:48.797748   81326 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:55:48.820855   81326 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:55:48.820923   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.876094   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.866734643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.876204   81326 docker.go:318] overlay module found
	I1009 18:55:48.877913   81326 out.go:179] * Using the docker driver based on existing profile
	I1009 18:55:48.879222   81326 start.go:305] selected driver: docker
	I1009 18:55:48.879244   81326 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.879315   81326 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:55:48.879420   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.933369   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.924148795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.933987   81326 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:55:48.934014   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:48.934075   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:48.934183   81326 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.936388   81326 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:55:48.937951   81326 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:55:48.939231   81326 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:55:48.940352   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:48.940388   81326 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:55:48.940398   81326 cache.go:64] Caching tarball of preloaded images
	I1009 18:55:48.940435   81326 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:55:48.940519   81326 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:55:48.940534   81326 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:55:48.940631   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:48.960098   81326 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:55:48.960121   81326 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:55:48.960153   81326 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:55:48.960177   81326 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:55:48.960231   81326 start.go:364] duration metric: took 36.84µs to acquireMachinesLock for "ha-608611"
	I1009 18:55:48.960251   81326 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:55:48.960256   81326 fix.go:54] fixHost starting: 
	I1009 18:55:48.960457   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:48.977497   81326 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 18:55:48.977523   81326 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:55:48.979512   81326 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 18:55:48.979585   81326 cli_runner.go:164] Run: docker start ha-608611
	I1009 18:55:49.217604   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:49.237615   81326 kic.go:430] container "ha-608611" state is running.
	I1009 18:55:49.238028   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:49.257124   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:49.257381   81326 machine.go:93] provisionDockerMachine start ...
	I1009 18:55:49.257452   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:49.276711   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:49.276957   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:49.276972   81326 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:55:49.277652   81326 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45654->127.0.0.1:32788: read: connection reset by peer
	I1009 18:55:52.425271   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.425302   81326 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:55:52.425356   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.443305   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.443509   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.443521   81326 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:55:52.597559   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.597633   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.615251   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.615459   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.615476   81326 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:55:52.760759   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:55:52.760787   81326 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:55:52.760833   81326 ubuntu.go:190] setting up certificates
	I1009 18:55:52.760848   81326 provision.go:84] configureAuth start
	I1009 18:55:52.760892   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:52.778450   81326 provision.go:143] copyHostCerts
	I1009 18:55:52.778486   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778529   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:55:52.778546   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778622   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:55:52.778743   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778772   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:55:52.778782   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778825   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:55:52.778905   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778928   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:55:52.778938   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778979   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:55:52.779124   81326 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:55:52.921150   81326 provision.go:177] copyRemoteCerts
	I1009 18:55:52.921251   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:55:52.921302   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.938746   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.041424   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:55:53.041487   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:55:53.059403   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:55:53.059465   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:55:53.077545   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:55:53.077599   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:55:53.095069   81326 provision.go:87] duration metric: took 334.207036ms to configureAuth
	I1009 18:55:53.095112   81326 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:55:53.095285   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:53.095376   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.113012   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:53.113249   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:53.113266   81326 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:55:53.371650   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:55:53.371676   81326 machine.go:96] duration metric: took 4.114278074s to provisionDockerMachine
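The /etc/sysconfig/crio.minikube drop-in written just above injects --insecure-registry for the service CIDR. In the kicbase image, crio.service is assumed (not shown in this log) to source that file via EnvironmentFile and expand $CRIO_MINIKUBE_OPTIONS on its ExecStart line; that wiring can be checked on the node with:

    systemctl cat crio | grep -nE 'EnvironmentFile|ExecStart'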
	I1009 18:55:53.371688   81326 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:55:53.371701   81326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:55:53.371771   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:55:53.371842   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.390223   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.493994   81326 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:55:53.497842   81326 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:55:53.497867   81326 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:55:53.497877   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:55:53.497926   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:55:53.498003   81326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:55:53.498014   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:55:53.498111   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:55:53.506094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:53.524346   81326 start.go:296] duration metric: took 152.640721ms for postStartSetup
	I1009 18:55:53.524419   81326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:53.524480   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.542600   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.642517   81326 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
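The two df probes above read /var capacity in different units: awk's NR==2 selects the data row, $5 the Use% column, $4 the Avail column. Standalone, with illustrative output values:

    df -h  /var | awk 'NR==2{print $5}'   # e.g. 23%  -> percent of /var used
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 61G  -> GiB still available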
	I1009 18:55:53.646989   81326 fix.go:56] duration metric: took 4.686726649s for fixHost
	I1009 18:55:53.647050   81326 start.go:83] releasing machines lock for "ha-608611", held for 4.686806047s
	I1009 18:55:53.647103   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:53.665515   81326 ssh_runner.go:195] Run: cat /version.json
	I1009 18:55:53.665578   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.665620   81326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:55:53.665678   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.684362   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.684684   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.836250   81326 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:53.842642   81326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:55:53.877786   81326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:55:53.882350   81326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:55:53.882415   81326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:55:53.890015   81326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
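The find above would have moved any bridge/podman CNI configs aside (suffix .mk_disabled) so that kindnet can own pod networking; here nothing matched. A read-only variant of the same probe, listing instead of renaming:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \)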
	I1009 18:55:53.890039   81326 start.go:495] detecting cgroup driver to use...
	I1009 18:55:53.890072   81326 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:55:53.890126   81326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:55:53.903830   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:55:53.915636   81326 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:55:53.915680   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:55:53.929373   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:55:53.941718   81326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:55:54.017230   81326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:55:54.097019   81326 docker.go:234] disabling docker service ...
	I1009 18:55:54.097119   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:55:54.110968   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:55:54.123470   81326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:55:54.198047   81326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:55:54.273477   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:55:54.285686   81326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:55:54.299501   81326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:55:54.299553   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.307932   81326 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:55:54.307990   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.316516   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.324850   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.333127   81326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:55:54.340857   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.349439   81326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.357872   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.367094   81326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:55:54.374845   81326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:55:54.382734   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.461355   81326 ssh_runner.go:195] Run: sudo systemctl restart crio
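Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys before the restart (a sketch assuming the stock cri-o section layout; the file carries other settings too):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]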
	I1009 18:55:54.565572   81326 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:55:54.565624   81326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:55:54.571180   81326 start.go:563] Will wait 60s for crictl version
	I1009 18:55:54.571234   81326 ssh_runner.go:195] Run: which crictl
	I1009 18:55:54.574912   81326 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:55:54.598972   81326 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:55:54.599070   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.626916   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.656626   81326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:55:54.658243   81326 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:55:54.675913   81326 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:55:54.680110   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
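The rewrite above is minikube's idempotent /etc/hosts update: grep -v drops any stale host.minikube.internal entry, the echo appends the fresh mapping, and the temp file plus sudo cp swaps the result back in with root privileges. Inside the node the mapping should then resolve as:

    getent hosts host.minikube.internal   # -> 192.168.49.1  host.minikube.internal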
	I1009 18:55:54.690492   81326 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:55:54.690604   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:54.690644   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.722701   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.722720   81326 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:55:54.722761   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.747850   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.747875   81326 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:55:54.747882   81326 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:55:54.748003   81326 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
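The [Service] drop-in above first clears the stock ExecStart and then substitutes minikube's kubelet invocation. Once the scp steps a few lines below land it in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the merged unit can be inspected with:

    systemctl cat kubelet                      # unit plus drop-ins, concatenated
    systemctl show kubelet -p DropInPaths      # just the list of drop-in files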
	I1009 18:55:54.748077   81326 ssh_runner.go:195] Run: crio config
	I1009 18:55:54.792222   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:54.792240   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:54.792253   81326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:55:54.792274   81326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:55:54.792387   81326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:55:54.792445   81326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:55:54.800546   81326 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:55:54.800612   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:55:54.808306   81326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:55:54.820571   81326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:55:54.832686   81326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
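A hedged sanity check on the file written above (recent kubeadm releases ship a validate subcommand; the path is the one from the scp line):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new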
	I1009 18:55:54.845124   81326 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:55:54.848713   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:55:54.858608   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.936048   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:54.960660   81326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:55:54.960682   81326 certs.go:195] generating shared ca certs ...
	I1009 18:55:54.960703   81326 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:54.960866   81326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:55:54.960929   81326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:55:54.960943   81326 certs.go:257] generating profile certs ...
	I1009 18:55:54.961058   81326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:55:54.961104   81326 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 18:55:54.961152   81326 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:55:55.543578   81326 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a ...
	I1009 18:55:55.543608   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a: {Name:mk997984d16894bde965cc8b9fac1d81fe6f4952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543774   81326 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a ...
	I1009 18:55:55.543787   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a: {Name:mk0466ac68a27af88f893685594376a4479a0b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543856   81326 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:55:55.543984   81326 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:55:55.544117   81326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:55:55.544131   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:55:55.544165   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:55:55.544184   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:55:55.544201   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:55:55.544214   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:55:55.544227   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:55:55.544240   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:55:55.544255   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:55:55.544302   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:55:55.544330   81326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:55:55.544341   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:55:55.544368   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:55:55.544389   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:55:55.544410   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:55:55.544447   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:55.544473   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.544487   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.544500   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.545009   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:55:55.563316   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:55:55.580320   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:55:55.597589   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:55:55.614398   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 18:55:55.631965   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:55:55.648792   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:55:55.666094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:55:55.683111   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:55:55.700010   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:55:55.717340   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:55:55.734654   81326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:55:55.747411   81326 ssh_runner.go:195] Run: openssl version
	I1009 18:55:55.753470   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:55:55.761715   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765434   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765492   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.798918   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:55:55.807085   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:55:55.815823   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819621   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819677   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.854342   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:55:55.862610   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:55:55.870964   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874789   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874839   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.909615   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
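The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up by the subject hash of its certificate, which is exactly what the interleaved openssl calls compute, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941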
	I1009 18:55:55.918204   81326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:55:55.922181   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:55:55.956689   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:55:55.991888   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:55:56.025768   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:55:56.066085   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:55:56.107192   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
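Each -checkend 86400 probe above exits 0 only if the certificate is still valid 86400 seconds (24 h) from now, so this run of commands is a batch "nothing expires within a day" gate. In isolation:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo 'valid for >= 24h' || echo 'expires within 24h'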
	I1009 18:55:56.142373   81326 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:56.142453   81326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:55:56.142506   81326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:55:56.169309   81326 cri.go:89] found id: ""
	I1009 18:55:56.169373   81326 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:55:56.177273   81326 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:55:56.177294   81326 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:55:56.177352   81326 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:55:56.184818   81326 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:56.185183   81326 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.185297   81326 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 18:55:56.185607   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.186078   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.186554   81326 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:55:56.186572   81326 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:55:56.186576   81326 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:55:56.186579   81326 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:55:56.186582   81326 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:55:56.186644   81326 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:55:56.186913   81326 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:55:56.194885   81326 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:55:56.194918   81326 kubeadm.go:601] duration metric: took 17.618968ms to restartPrimaryControlPlane
	I1009 18:55:56.194926   81326 kubeadm.go:402] duration metric: took 52.565569ms to StartCluster
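The "does not require reconfiguration" decision above hinges on the diff a few lines up: diff -u exits 0 iff the freshly rendered kubeadm.yaml.new matches the one already on disk, in which case the control plane is left alone. The same test in isolation:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo 'no reconfiguration required'
    fi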
	I1009 18:55:56.194954   81326 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195014   81326 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.195534   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195769   81326 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:55:56.195852   81326 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:55:56.195922   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:56.195932   81326 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 18:55:56.195965   81326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 18:55:56.195925   81326 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 18:55:56.195992   81326 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 18:55:56.196019   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.196264   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.196400   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.199213   81326 out.go:179] * Verifying Kubernetes components...
	I1009 18:55:56.200413   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:56.216177   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.216605   81326 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 18:55:56.216648   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.217159   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.217419   81326 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:55:56.219196   81326 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.219223   81326 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:55:56.219282   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.243943   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:56.245925   81326 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:55:56.245944   81326 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:55:56.245984   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.263872   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:56.303542   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:56.316831   81326 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 18:55:56.352305   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.372260   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.412054   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.412090   81326 retry.go:31] will retry after 210.547469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
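Every apply in this stretch fails the same way: kubectl's client-side validation needs the apiserver's OpenAPI document, and nothing is listening on localhost:8443 yet after the crio/kubelet restart, so retry.go keeps re-running the apply with growing, jittered delays (210ms, 365ms, and so on below). A minimal sketch of the condition being waited out, assuming the same endpoint from inside the node:

    until curl -ksS --max-time 2 https://localhost:8443/healthz >/dev/null; do
      sleep 1
    done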
	W1009 18:55:56.427954   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.427986   81326 retry.go:31] will retry after 365.761186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.623265   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:56.675568   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.675605   81326 retry.go:31] will retry after 331.492885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.794903   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.846158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.846190   81326 retry.go:31] will retry after 366.903412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.007285   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.058254   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.058285   81326 retry.go:31] will retry after 440.442086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.213614   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.266588   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.266622   81326 retry.go:31] will retry after 403.844371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.499702   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.552130   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.552176   81326 retry.go:31] will retry after 1.153605517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.671430   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.724158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.724189   81326 retry.go:31] will retry after 1.186791372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:55:58.317829   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:55:58.706293   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:58.758710   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.758743   81326 retry.go:31] will retry after 1.743017897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.911763   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:58.963731   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.963764   81326 retry.go:31] will retry after 777.451228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.742307   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:59.794404   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.794436   81326 retry.go:31] will retry after 1.290318475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:00.318311   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:00.502629   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:00.555745   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:00.555777   81326 retry.go:31] will retry after 2.524197607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.084941   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:01.136443   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.136470   81326 retry.go:31] will retry after 1.577041718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.713959   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:02.768944   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.769029   81326 retry.go:31] will retry after 2.739822337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:02.817505   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:03.080936   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:03.135285   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:03.135312   81326 retry.go:31] will retry after 2.274306578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:04.818421   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:05.409777   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:05.464614   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.464653   81326 retry.go:31] will retry after 2.562562636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.509838   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:05.563089   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.563116   81326 retry.go:31] will retry after 7.257063106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
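
The kubectl stderr repeated above is a client-side validation failure: before applying, kubectl downloads the cluster's OpenAPI schema, and with the apiserver refusing connections that fetch fails. The message's own suggestion, --validate=false, skips the schema download, though here the apply itself would still fail since it needs the same apiserver. A hedged Go sketch of invoking the logged command with that flag (paths copied from the log above; this is an illustration, not minikube's actual code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // sudo accepts leading VAR=value arguments, matching the logged
        // "sudo KUBECONFIG=... kubectl apply ..." invocation, with
        // --validate=false added as the error message suggests.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storageclass.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s\nerr: %v\n", out, err)
    }
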
	W1009 18:56:07.317778   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:08.028172   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:08.085551   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:08.085584   81326 retry.go:31] will retry after 5.304285212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:09.817756   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:12.317749   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:12.820933   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:12.874853   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:12.874882   81326 retry.go:31] will retry after 14.146267666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.390058   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:13.445661   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.445690   81326 retry.go:31] will retry after 12.009663375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:14.317787   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:16.817710   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:19.317672   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:21.817409   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:23.817676   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:25.455892   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:25.508571   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:25.508602   81326 retry.go:31] will retry after 16.328819921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:26.317511   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:27.021826   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:27.074125   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:27.074173   81326 retry.go:31] will retry after 16.507388606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:28.317794   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:30.318017   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:32.318056   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:34.318298   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:36.817418   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:38.817580   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:41.317407   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:41.838597   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:41.891282   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:41.891335   81326 retry.go:31] will retry after 22.626101475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:43.317591   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:43.581928   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:43.635774   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:43.635813   81326 retry.go:31] will retry after 29.761890826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:45.317977   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:47.817468   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:49.817818   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:51.818094   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:54.317753   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:56.318437   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:58.817821   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:01.317707   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:03.318244   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:04.517790   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:04.570824   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:04.570858   81326 retry.go:31] will retry after 21.453197357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:05.817503   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:07.817615   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:10.317488   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:12.318329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:13.398664   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:13.451327   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:13.451363   81326 retry.go:31] will retry after 18.539744202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:14.817577   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:16.818008   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:19.317830   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:21.817431   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:23.817855   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:26.024797   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:26.087554   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:26.087679   81326 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 18:57:26.317736   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:28.817913   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:31.317538   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:31.991746   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:32.045057   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:32.045194   81326 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:57:32.046928   81326 out.go:179] * Enabled addons: 
	I1009 18:57:32.048585   81326 addons.go:514] duration metric: took 1m35.852737584s for enable addons: enabled=[]
	W1009 18:57:33.318217   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:35.817737   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:38.317513   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:40.317684   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:42.317825   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:44.817551   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:46.818116   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:49.317639   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:51.317685   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:53.318174   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:55.817487   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:57.817583   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:00.317490   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:02.817473   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:04.817686   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:06.817818   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:08.817877   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:11.317640   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:13.817637   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:16.317565   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:18.817617   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:21.317386   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:23.817678   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:26.317892   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:28.817872   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:31.317398   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:33.817489   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:36.317642   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:38.817932   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:41.317687   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:43.817793   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:46.318079   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:48.318161   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:50.818473   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:53.317422   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:55.317574   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:57.817543   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:00.317518   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:02.818366   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:05.317450   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:07.817392   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:09.817445   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:11.818236   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:13.818364   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:16.317528   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:18.318349   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:20.817329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:22.817369   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:24.817503   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:26.817612   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:29.317514   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:31.318361   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:33.817552   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:36.317715   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:38.817633   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:41.317409   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:43.817587   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:45.818329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:48.317431   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:50.318318   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:52.817414   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:54.817507   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:57.318094   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:59:59.817580   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:02.317437   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:04.317609   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:06.317877   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:08.817785   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:11.317477   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:13.817723   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:16.317801   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:18.817941   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:21.317577   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:23.817596   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:26.317493   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:28.318451   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:30.817458   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:33.317469   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:35.817460   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:37.817554   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:40.317422   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:42.318284   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:44.817384   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:46.817464   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:48.817666   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:51.317590   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:53.817590   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:56.317720   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:00:58.817684   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 23 further identical connection-refused retries (19:01:01 through 19:01:53) elided ...]
	W1009 19:01:55.818044   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:01:56.318011   81326 node_ready.go:38] duration metric: took 6m0.001141049s for node "ha-608611" to be "Ready" ...
	I1009 19:01:56.320179   81326 out.go:203] 
	W1009 19:01:56.321631   81326 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:01:56.321647   81326 out.go:285] * 
	W1009 19:01:56.323308   81326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:01:56.324645   81326 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.072459628Z" level=info msg="createCtr: deleting container cee82ca096ada2745ddfd20399be0511826b61e086329781f9b2247a3f7121f6 from storage" id=c50413e0-d421-4959-905a-4d1005a1f36a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.074921498Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=db4c736a-2288-48c8-8b24-876eaba6d487 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.075219493Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=c50413e0-d421-4959-905a-4d1005a1f36a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.045242534Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=28e234cb-d240-47e3-869e-ed1c2e16a7cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.045371754Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d6a9ff02-69dc-418a-8463-e96a6076d37d name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.046045414Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d2871d16-d577-4a09-a337-69cf3392fbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.04607563Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d9af252a-ed9a-4f37-b905-7bd43bd870b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.047302632Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.047703525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.048079465Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.048730686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.053214604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.053763132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.055064813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.055575645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.073197289Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074521483Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074607421Z" level=info msg="createCtr: deleting container ID e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7 from idIndex" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074648652Z" level=info msg="createCtr: removing container e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074688721Z" level=info msg="createCtr: deleting container e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7 from storage" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075873597Z" level=info msg="createCtr: deleting container ID e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da from idIndex" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075908928Z" level=info msg="createCtr: removing container e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075943992Z" level=info msg="createCtr: deleting container e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da from storage" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.077834944Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.078091888Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:01:57.281156    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:57.281695    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:57.283280    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:57.283785    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:57.285356    2011 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:01:57 up  1:44,  0 user,  load average: 0.84, 0.44, 0.21
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:01:52 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:52 ha-608611 kubelet[669]:  > podSandboxID="e5545f56553f13edf8ed7b4d48ff629fc878deb4b5a926ba40a84cddc3e339b6"
	Oct 09 19:01:52 ha-608611 kubelet[669]: E1009 19:01:52.075500     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:52 ha-608611 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:52 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:52 ha-608611 kubelet[669]: E1009 19:01:52.076638     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.044824     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.044946     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078187     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > podSandboxID="2aa0bb22fe65d4986dc9aea3a26f98b8fe8d898e11d03753d94e780f1d08d143"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078296     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078325     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078326     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > podSandboxID="be3d21cea8492905ced72270cf5ee2be1474dc62f2c5be112263d2c070371c32"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078407     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.079540     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 19:01:55 ha-608611 kubelet[669]: E1009 19:01:55.058005     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 19:01:55 ha-608611 kubelet[669]: E1009 19:01:55.543425     669 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:01:56 ha-608611 kubelet[669]: E1009 19:01:56.136499     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce78ed4c19733  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:55:55.035850547 +0000 UTC m=+0.073552050,LastTimestamp:2025-10-09 18:55:55.035850547 +0000 UTC m=+0.073552050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	

                                                
                                                
-- /stdout --
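Note the root failure running through the CRI-O and kubelet sections above: every CreateContainer call dies with "cannot open sd-bus: No such file or directory". With the systemd cgroup driver (CgroupDriver:systemd in the docker info later in this post-mortem), the OCI runtime asks systemd over its D-Bus socket to create the container's scope, and that socket is evidently unreachable inside the node container. A quick hedged check for the sockets involved (the paths below are the standard systemd/D-Bus locations, an assumption, not paths taken from this log):

// sdbus_check.go: look for the bus sockets a systemd cgroup manager needs.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed standard locations; if these are absent, container creation under
	// the systemd cgroup driver fails much like the "cannot open sd-bus" above.
	for _, p := range []string{
		"/run/systemd/private",        // systemd's private manager socket
		"/run/dbus/system_bus_socket", // system D-Bus socket
	} {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: missing (%v)\n", p, err)
			continue
		}
		fmt.Printf("%s: present\n", p)
	}
}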
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (283.529537ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (370.17s)
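The six minutes of connection-refused retries above come from minikube's node-readiness wait: it polls the node object through the apiserver until the Ready condition turns true or the deadline passes, which is exactly the "WaitNodeCondition: context deadline exceeded" exit. A minimal sketch of that polling pattern with client-go (the function name and the 2.5s cadence are illustrative, not minikube's actual node_ready.go):

// nodewait.go: poll a node's Ready condition until the context expires.
// Sketch only; mirrors the retry loop visible in the log above.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(2500 * time.Millisecond) // ~2.5s between retries, as in the log
	defer tick.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node is Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			// Surfaces as "waiting for node to be ready: ... context deadline exceeded".
			return fmt.Errorf("waiting for node %q to be ready: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}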

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node delete m03 --alsologtostderr -v 5
E1009 19:01:57.693474   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 node delete m03 --alsologtostderr -v 5: exit status 103 (241.428183ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-608611 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-608611"

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:01:57.697516   85397 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:01:57.697814   85397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:57.697825   85397 out.go:374] Setting ErrFile to fd 2...
	I1009 19:01:57.697829   85397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:57.698014   85397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:01:57.698334   85397 mustload.go:65] Loading cluster: ha-608611
	I1009 19:01:57.698674   85397 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:01:57.699054   85397 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:01:57.716262   85397 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:01:57.716527   85397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:01:57.770447   85397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:01:57.761076491 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:01:57.770554   85397 api_server.go:166] Checking apiserver status ...
	I1009 19:01:57.770594   85397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:01:57.770626   85397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:01:57.787307   85397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	W1009 19:01:57.891100   85397 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:01:57.892907   85397 out.go:179] * The control-plane node ha-608611 apiserver is not running: (state=Stopped)
	I1009 19:01:57.894322   85397 out.go:179]   To start a cluster, run: "minikube start -p ha-608611"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-608611 node delete m03 --alsologtostderr -v 5": exit status 103
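The stderr above shows how the delete command decided the apiserver was down: minikube opened an SSH session to the node's forwarded port (127.0.0.1:32788) and ran sudo pgrep -xnf kube-apiserver.*minikube.*; pgrep exits nonzero when nothing matches, which minikube maps to state=Stopped. A rough host-side equivalent, reusing the port and key path from the log (a sketch, not minikube's actual ssh_runner):

// apiserver_probe.go: approximate minikube's "Checking apiserver status" probe.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa",
		"-p", "32788", "docker@127.0.0.1",
		"sudo pgrep -xnf kube-apiserver.*minikube.*")
	if err := cmd.Run(); err != nil {
		// pgrep exits 1 on no match -> reported as apiserver Stopped.
		fmt.Println("apiserver: Stopped:", err)
		return
	}
	fmt.Println("apiserver: Running")
}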
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 2 (284.494196ms)

                                                
                                                
-- stdout --
	ha-608611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:01:57.938760   85491 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:01:57.939021   85491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:57.939031   85491 out.go:374] Setting ErrFile to fd 2...
	I1009 19:01:57.939038   85491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:01:57.939259   85491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:01:57.939486   85491 out.go:368] Setting JSON to false
	I1009 19:01:57.939518   85491 mustload.go:65] Loading cluster: ha-608611
	I1009 19:01:57.939644   85491 notify.go:220] Checking for updates...
	I1009 19:01:57.939917   85491 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:01:57.939932   85491 status.go:174] checking status of ha-608611 ...
	I1009 19:01:57.940382   85491 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:01:57.958887   85491 status.go:371] ha-608611 host status = "Running" (err=<nil>)
	I1009 19:01:57.958924   85491 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:01:57.959218   85491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:01:57.976453   85491 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:01:57.976686   85491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:01:57.976719   85491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:01:57.995419   85491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:01:58.094178   85491 ssh_runner.go:195] Run: systemctl --version
	I1009 19:01:58.100218   85491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:01:58.112299   85491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:01:58.168898   85491 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:01:58.158376844 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:01:58.169450   85491 kubeconfig.go:125] found "ha-608611" server: "https://192.168.49.2:8443"
	I1009 19:01:58.169480   85491 api_server.go:166] Checking apiserver status ...
	I1009 19:01:58.169520   85491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1009 19:01:58.179441   85491 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:01:58.179468   85491 status.go:463] ha-608611 apiserver status = Running (err=<nil>)
	I1009 19:01:58.179477   85491 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5" : exit status 2
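Throughout these post-mortems the harness treats "exit status 2 (may be ok)" from minikube status as informative rather than fatal: status encodes component health in its exit code, so a stopped apiserver yields a nonzero exit while stdout still carries the requested field. A hedged sketch of consuming that contract from Go (binary path and profile copied from the log):

// status_probe.go: read one status field and tolerate the "stopped" exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format", "{{.APIServer}}", "-p", "ha-608611").Output()
	state := strings.TrimSpace(string(out)) // e.g. "Stopped"
	if err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Nonzero exit means a component is down, not that the command broke.
			fmt.Printf("component state %q (exit %d, may be ok)\n", state, ee.ExitCode())
			return
		}
		panic(err) // the binary itself failed to run
	}
	fmt.Printf("component state %q\n", state)
}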
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 81525,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:55:49.004659898Z",
	            "FinishedAt": "2025-10-09T18:55:47.866160923Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd78a6800d0ca67ea1af19252b5bd24a3e3fc828387489071234de54472900f3",
	            "SandboxKey": "/var/run/docker/netns/bd78a6800d0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:63:77:d6:c6:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "8c9e2b0ece853c05aed38cc16cf83246ef35859c6d45bb06281e9e29114c856e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
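The cli_runner lines in the stderr blocks above (docker container inspect -f ...) extract single fields from exactly this JSON; for example, the SSH client dialed 127.0.0.1:32788 because that is the host port bound to the container's 22/tcp. The same lookup from Go, shelling out with the template the log shows:

// port_lookup.go: pull the host port for 22/tcp, as minikube's cli_runner does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Go template copied from the log; backticks avoid escaping the inner quotes.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-608611").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 32788
}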
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (280.66832ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- rollout status deployment/busybox                      │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                          │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                            │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:55:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:55:48.782369   81326 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:48.782604   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782612   81326 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:48.782616   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782782   81326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:48.783226   81326 out.go:368] Setting JSON to false
	I1009 18:55:48.784053   81326 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5897,"bootTime":1760030252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:55:48.784156   81326 start.go:141] virtualization: kvm guest
	I1009 18:55:48.786563   81326 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:55:48.788077   81326 notify.go:220] Checking for updates...
	I1009 18:55:48.788126   81326 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:55:48.789665   81326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:55:48.791095   81326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:48.792613   81326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:55:48.794226   81326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:55:48.795794   81326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:55:48.797638   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:48.797748   81326 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:55:48.820855   81326 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:55:48.820923   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.876094   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.866734643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.876204   81326 docker.go:318] overlay module found
	I1009 18:55:48.877913   81326 out.go:179] * Using the docker driver based on existing profile
	I1009 18:55:48.879222   81326 start.go:305] selected driver: docker
	I1009 18:55:48.879244   81326 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.879315   81326 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:55:48.879420   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.933369   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.924148795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
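
The two identical docker info dumps above come from minikube validating the docker driver (info.go:266) before and after committing to it. A minimal sketch of reproducing the same query by hand, assuming the docker CLI and jq are available on the host; the field names are the JSON keys visible in the dump:

    # Capture the same JSON document minikube parses
    docker system info --format '{{json .}}' > /tmp/docker-info.json
    # Pull out the fields the log shows minikube caring about
    jq -r '.Driver, .CgroupDriver, .ServerVersion' /tmp/docker-info.json
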
	I1009 18:55:48.933987   81326 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:55:48.934014   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:48.934075   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:48.934183   81326 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
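
The cluster config dumped above is what gets persisted to the profile's config.json (see the "Saving config" line below). A small sketch of pulling individual fields back out of that file, assuming the jenkins paths from this run and the field names shown in the dump:

    CFG=/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json
    # KubernetesVersion, driver, and the primary node's IP, as recorded for this profile
    jq -r '.KubernetesConfig.KubernetesVersion, .Driver, .Nodes[0].IP' "$CFG"
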
	I1009 18:55:48.936388   81326 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:55:48.937951   81326 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:55:48.939231   81326 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:55:48.940352   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:48.940388   81326 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:55:48.940398   81326 cache.go:64] Caching tarball of preloaded images
	I1009 18:55:48.940435   81326 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:55:48.940519   81326 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:55:48.940534   81326 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:55:48.940631   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:48.960098   81326 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:55:48.960121   81326 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:55:48.960153   81326 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:55:48.960177   81326 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:55:48.960231   81326 start.go:364] duration metric: took 36.84µs to acquireMachinesLock for "ha-608611"
	I1009 18:55:48.960251   81326 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:55:48.960256   81326 fix.go:54] fixHost starting: 
	I1009 18:55:48.960457   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:48.977497   81326 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 18:55:48.977523   81326 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:55:48.979512   81326 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 18:55:48.979585   81326 cli_runner.go:164] Run: docker start ha-608611
	I1009 18:55:49.217604   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:49.237615   81326 kic.go:430] container "ha-608611" state is running.
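
The restart path is just the two docker commands logged above; replaying them by hand against this profile's container looks like:

    docker start ha-608611
    # minikube polls this template until the state reads "running" (kic.go:430)
    docker container inspect ha-608611 --format '{{.State.Status}}'
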
	I1009 18:55:49.238028   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:49.257124   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:49.257381   81326 machine.go:93] provisionDockerMachine start ...
	I1009 18:55:49.257452   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:49.276711   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:49.276957   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:49.276972   81326 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:55:49.277652   81326 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45654->127.0.0.1:32788: read: connection reset by peer
	I1009 18:55:52.425271   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.425302   81326 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:55:52.425356   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.443305   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.443509   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.443521   81326 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:55:52.597559   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.597633   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.615251   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.615459   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.615476   81326 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
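
Both hostname steps (sudo hostname + the /etc/hosts patch above) can be checked from the host with docker exec, assuming the container from this run is still up:

    docker exec ha-608611 hostname                    # expect: ha-608611
    docker exec ha-608611 grep ha-608611 /etc/hosts   # expect a 127.0.1.1 ha-608611 entry
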
	I1009 18:55:52.760759   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:55:52.760787   81326 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:55:52.760833   81326 ubuntu.go:190] setting up certificates
	I1009 18:55:52.760848   81326 provision.go:84] configureAuth start
	I1009 18:55:52.760892   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:52.778450   81326 provision.go:143] copyHostCerts
	I1009 18:55:52.778486   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778529   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:55:52.778546   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778622   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:55:52.778743   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778772   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:55:52.778782   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778825   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:55:52.778905   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778928   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:55:52.778938   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778979   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:55:52.779124   81326 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:55:52.921150   81326 provision.go:177] copyRemoteCerts
	I1009 18:55:52.921251   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:55:52.921302   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.938746   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.041424   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:55:53.041487   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:55:53.059403   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:55:53.059465   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:55:53.077545   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:55:53.077599   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:55:53.095069   81326 provision.go:87] duration metric: took 334.207036ms to configureAuth
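
configureAuth regenerated the machine's server certificate with the SAN list shown above (san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]). A quick sanity check that those SANs actually landed in the cert, assuming the server.pem path from the auth options earlier in the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
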
	I1009 18:55:53.095112   81326 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:55:53.095285   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:53.095376   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.113012   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:53.113249   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:53.113266   81326 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:55:53.371650   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:55:53.371676   81326 machine.go:96] duration metric: took 4.114278074s to provisionDockerMachine
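
The SSH command above writes a one-line sysconfig drop-in and restarts CRI-O. A hedged sketch of verifying both from the host, assuming docker exec access to the node container:

    docker exec ha-608611 cat /etc/sysconfig/crio.minikube
    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    docker exec ha-608611 systemctl is-active crio   # expect: active
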
	I1009 18:55:53.371688   81326 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:55:53.371701   81326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:55:53.371771   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:55:53.371842   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.390223   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.493994   81326 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:55:53.497842   81326 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:55:53.497867   81326 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:55:53.497877   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:55:53.497926   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:55:53.498003   81326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:55:53.498014   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:55:53.498111   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:55:53.506094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:53.524346   81326 start.go:296] duration metric: took 152.640721ms for postStartSetup
	I1009 18:55:53.524419   81326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:53.524480   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.542600   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.642517   81326 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:55:53.646989   81326 fix.go:56] duration metric: took 4.686726649s for fixHost
	I1009 18:55:53.647050   81326 start.go:83] releasing machines lock for "ha-608611", held for 4.686806047s
	I1009 18:55:53.647103   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:53.665515   81326 ssh_runner.go:195] Run: cat /version.json
	I1009 18:55:53.665578   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.665620   81326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:55:53.665678   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.684362   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.684684   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.836250   81326 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:53.842642   81326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:55:53.877786   81326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:55:53.882350   81326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:55:53.882415   81326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:55:53.890015   81326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:55:53.890039   81326 start.go:495] detecting cgroup driver to use...
	I1009 18:55:53.890072   81326 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:55:53.890126   81326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:55:53.903830   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:55:53.915636   81326 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:55:53.915680   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:55:53.929373   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:55:53.941718   81326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:55:54.017230   81326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:55:54.097019   81326 docker.go:234] disabling docker service ...
	I1009 18:55:54.097119   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:55:54.110968   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:55:54.123470   81326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:55:54.198047   81326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:55:54.273477   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:55:54.285686   81326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:55:54.299501   81326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:55:54.299553   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.307932   81326 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:55:54.307990   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.316516   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.324850   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.333127   81326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:55:54.340857   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.349439   81326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.357872   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.367094   81326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:55:54.374845   81326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:55:54.382734   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.461355   81326 ssh_runner.go:195] Run: sudo systemctl restart crio
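
The sed sequence above rewrote /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and the unprivileged-port sysctl. Reading the edited keys back before trusting the restart is one way to confirm the rewrite took, assuming docker exec access:

    docker exec ha-608611 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
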
	I1009 18:55:54.565572   81326 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:55:54.565624   81326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:55:54.571180   81326 start.go:563] Will wait 60s for crictl version
	I1009 18:55:54.571234   81326 ssh_runner.go:195] Run: which crictl
	I1009 18:55:54.574912   81326 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:55:54.598972   81326 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
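
The crictl.yaml written a few lines earlier is what lets the plain crictl calls here resolve to the CRI-O socket. Passing the endpoint explicitly is equivalent and makes the dependency visible; a sketch, assuming the socket path from this run:

    docker exec ha-608611 sudo crictl \
      --runtime-endpoint unix:///var/run/crio/crio.sock version
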
	I1009 18:55:54.599070   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.626916   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.656626   81326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:55:54.658243   81326 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:55:54.675913   81326 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:55:54.680110   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
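
host.minikube.internal is the stable name guests use to reach the host; the bash one-liner above rewrites /etc/hosts via a temp file so the entry appears exactly once. Checking the result:

    docker exec ha-608611 grep host.minikube.internal /etc/hosts
    # expect: 192.168.49.1	host.minikube.internal
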
	I1009 18:55:54.690492   81326 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:55:54.690604   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:54.690644   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.722701   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.722720   81326 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:55:54.722761   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.747850   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.747875   81326 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:55:54.747882   81326 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:55:54.748003   81326 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
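
The unit fragment above is shipped to the node as a systemd drop-in (the 359-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). systemd merges it over the base kubelet unit, which can be confirmed with:

    # prints the base unit plus every drop-in, including 10-kubeadm.conf
    docker exec ha-608611 systemctl cat kubelet
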
	I1009 18:55:54.748077   81326 ssh_runner.go:195] Run: crio config
	I1009 18:55:54.792222   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:54.792240   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:54.792253   81326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:55:54.792274   81326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:55:54.792387   81326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:55:54.792445   81326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:55:54.800546   81326 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:55:54.800612   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:55:54.808306   81326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:55:54.820571   81326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:55:54.832686   81326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
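
With the rendered kubeadm config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked against the pinned kubeadm binary. A sketch, assuming `kubeadm config validate` is available in the v1.34.1 binary:

    docker exec ha-608611 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new
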
	I1009 18:55:54.845124   81326 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:55:54.848713   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:55:54.858608   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.936048   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:54.960660   81326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:55:54.960682   81326 certs.go:195] generating shared ca certs ...
	I1009 18:55:54.960703   81326 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:54.960866   81326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:55:54.960929   81326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:55:54.960943   81326 certs.go:257] generating profile certs ...
	I1009 18:55:54.961058   81326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:55:54.961104   81326 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 18:55:54.961152   81326 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:55:55.543578   81326 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a ...
	I1009 18:55:55.543608   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a: {Name:mk997984d16894bde965cc8b9fac1d81fe6f4952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543774   81326 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a ...
	I1009 18:55:55.543787   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a: {Name:mk0466ac68a27af88f893685594376a4479a0b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543856   81326 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:55:55.543984   81326 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:55:55.544117   81326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:55:55.544131   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:55:55.544165   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:55:55.544184   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:55:55.544201   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:55:55.544214   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:55:55.544227   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:55:55.544240   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:55:55.544255   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:55:55.544302   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:55:55.544330   81326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:55:55.544341   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:55:55.544368   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:55:55.544389   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:55:55.544410   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:55:55.544447   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:55.544473   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.544487   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.544500   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.545009   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:55:55.563316   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:55:55.580320   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:55:55.597589   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:55:55.614398   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 18:55:55.631965   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:55:55.648792   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:55:55.666094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:55:55.683111   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:55:55.700010   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:55:55.717340   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:55:55.734654   81326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:55:55.747411   81326 ssh_runner.go:195] Run: openssl version
	I1009 18:55:55.753470   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:55:55.761715   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765434   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765492   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.798918   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:55:55.807085   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:55:55.815823   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819621   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819677   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.854342   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:55:55.862610   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:55:55.870964   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874789   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874839   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.909615   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
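
The openssl/ln pairs above implement the standard CA hash-link layout: each PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). The generic recipe, run inside the node:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    # subject-name hash, the same value minikube computes above
    h=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
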
	I1009 18:55:55.918204   81326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:55:55.922181   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:55:55.956689   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:55:55.991888   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:55:56.025768   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:55:56.066085   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:55:56.107192   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
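
Each -checkend 86400 call above asks whether the certificate expires within the next 24 hours: openssl exits 0 if the cert stays valid for the full window and 1 otherwise, which is why no output follows the commands. Standalone:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/etcd/server.crt \
      && echo 'valid for >=24h' || echo 'expires within 24h'
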
	I1009 18:55:56.142373   81326 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:56.142453   81326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:55:56.142506   81326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:55:56.169309   81326 cri.go:89] found id: ""
	I1009 18:55:56.169373   81326 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:55:56.177273   81326 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:55:56.177294   81326 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:55:56.177352   81326 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:55:56.184818   81326 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:56.185183   81326 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.185297   81326 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 18:55:56.185607   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.186078   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.186554   81326 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:55:56.186572   81326 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:55:56.186576   81326 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:55:56.186579   81326 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:55:56.186582   81326 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:55:56.186644   81326 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 18:55:56.186913   81326 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:55:56.194885   81326 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:55:56.194918   81326 kubeadm.go:601] duration metric: took 17.618968ms to restartPrimaryControlPlane
	I1009 18:55:56.194926   81326 kubeadm.go:402] duration metric: took 52.565569ms to StartCluster
	I1009 18:55:56.194954   81326 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195014   81326 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.195534   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195769   81326 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:55:56.195852   81326 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:55:56.195922   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:56.195932   81326 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 18:55:56.195965   81326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 18:55:56.195925   81326 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 18:55:56.195992   81326 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 18:55:56.196019   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.196264   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.196400   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.199213   81326 out.go:179] * Verifying Kubernetes components...
	I1009 18:55:56.200413   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:56.216177   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.216605   81326 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 18:55:56.216648   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.217159   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.217419   81326 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:55:56.219196   81326 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.219223   81326 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:55:56.219282   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.243943   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:56.245925   81326 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:55:56.245944   81326 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:55:56.245984   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.263872   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:56.303542   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:56.316831   81326 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 18:55:56.352305   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.372260   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.412054   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.412090   81326 retry.go:31] will retry after 210.547469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:55:56.427954   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.427986   81326 retry.go:31] will retry after 365.761186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.623265   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:56.675568   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.675605   81326 retry.go:31] will retry after 331.492885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.794903   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.846158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.846190   81326 retry.go:31] will retry after 366.903412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.007285   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.058254   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.058285   81326 retry.go:31] will retry after 440.442086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.213614   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.266588   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.266622   81326 retry.go:31] will retry after 403.844371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.499702   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.552130   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.552176   81326 retry.go:31] will retry after 1.153605517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.671430   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.724158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.724189   81326 retry.go:31] will retry after 1.186791372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:55:58.317829   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:55:58.706293   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:58.758710   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.758743   81326 retry.go:31] will retry after 1.743017897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.911763   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:58.963731   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.963764   81326 retry.go:31] will retry after 777.451228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.742307   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:59.794404   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.794436   81326 retry.go:31] will retry after 1.290318475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:00.318311   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:00.502629   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:00.555745   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:00.555777   81326 retry.go:31] will retry after 2.524197607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.084941   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:01.136443   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.136470   81326 retry.go:31] will retry after 1.577041718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.713959   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:02.768944   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.769029   81326 retry.go:31] will retry after 2.739822337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:02.817505   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:03.080936   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:03.135285   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:03.135312   81326 retry.go:31] will retry after 2.274306578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:04.818421   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:05.409777   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:05.464614   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.464653   81326 retry.go:31] will retry after 2.562562636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.509838   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:05.563089   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.563116   81326 retry.go:31] will retry after 7.257063106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:07.317778   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:08.028172   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:08.085551   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:08.085584   81326 retry.go:31] will retry after 5.304285212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:09.817756   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:12.317749   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:12.820933   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:12.874853   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:12.874882   81326 retry.go:31] will retry after 14.146267666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.390058   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:13.445661   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.445690   81326 retry.go:31] will retry after 12.009663375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:14.317787   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:16.817710   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:19.317672   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:21.817409   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:23.817676   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:25.455892   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:25.508571   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:25.508602   81326 retry.go:31] will retry after 16.328819921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:26.317511   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:27.021826   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:27.074125   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:27.074173   81326 retry.go:31] will retry after 16.507388606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:28.317794   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:30.318017   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:32.318056   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:34.318298   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:36.817418   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:38.817580   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:41.317407   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:41.838597   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:41.891282   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:41.891335   81326 retry.go:31] will retry after 22.626101475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:43.317591   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:43.581928   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:43.635774   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:43.635813   81326 retry.go:31] will retry after 29.761890826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:45.317977   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:47.817468   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:49.817818   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:51.818094   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:54.317753   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:56.318437   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:58.817821   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:01.317707   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:03.318244   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:04.517790   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:04.570824   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:04.570858   81326 retry.go:31] will retry after 21.453197357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:05.817503   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:07.817615   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:10.317488   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:12.318329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:13.398664   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:13.451327   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:13.451363   81326 retry.go:31] will retry after 18.539744202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:14.817577   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:16.818008   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:19.317830   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:21.817431   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:23.817855   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:26.024797   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:26.087554   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:26.087679   81326 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 18:57:26.317736   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:28.817913   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:31.317538   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:31.991746   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:32.045057   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:32.045194   81326 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:57:32.046928   81326 out.go:179] * Enabled addons: 
	I1009 18:57:32.048585   81326 addons.go:514] duration metric: took 1m35.852737584s for enable addons: enabled=[]
	W1009 18:57:33.318217   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:35.817737   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:38.317513   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:40.317684   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:42.317825   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:44.817551   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:46.818116   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:49.317639   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:51.317685   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:53.318174   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:55.817487   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:57.817583   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:00.317490   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:58:02.817473   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 98 nearly identical node_ready.go:55 warnings elided: the same Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611" / "connection refused" retry repeats roughly every 2–2.5s from 18:58:04 through 19:01:53 ...]
	W1009 19:01:55.818044   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:01:56.318011   81326 node_ready.go:38] duration metric: took 6m0.001141049s for node "ha-608611" to be "Ready" ...
	I1009 19:01:56.320179   81326 out.go:203] 
	W1009 19:01:56.321631   81326 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:01:56.321647   81326 out.go:285] * 
	W1009 19:01:56.323308   81326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:01:56.324645   81326 out.go:203] 
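	
	Annotation: the elided loop above is minikube polling the node's Ready condition until its 6m WaitNodeCondition deadline expires; every attempt fails because nothing is listening on 192.168.49.2:8443. A minimal sketch of the same check, assuming kubectl can reach the cluster (node name taken from this run):
	
	    # Poll the Ready condition roughly the way the minikube wait loop does (sketch).
	    until kubectl get node ha-608611 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
	      sleep 2   # in this run every iteration would fail with "connection refused"
	    done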
	
	
	==> CRI-O <==
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.072459628Z" level=info msg="createCtr: deleting container cee82ca096ada2745ddfd20399be0511826b61e086329781f9b2247a3f7121f6 from storage" id=c50413e0-d421-4959-905a-4d1005a1f36a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.074921498Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=db4c736a-2288-48c8-8b24-876eaba6d487 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.075219493Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=c50413e0-d421-4959-905a-4d1005a1f36a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.045242534Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=28e234cb-d240-47e3-869e-ed1c2e16a7cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.045371754Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d6a9ff02-69dc-418a-8463-e96a6076d37d name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.046045414Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d2871d16-d577-4a09-a337-69cf3392fbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.04607563Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d9af252a-ed9a-4f37-b905-7bd43bd870b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.047302632Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.047703525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.048079465Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.048730686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.053214604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.053763132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.055064813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.055575645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.073197289Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074521483Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074607421Z" level=info msg="createCtr: deleting container ID e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7 from idIndex" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074648652Z" level=info msg="createCtr: removing container e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074688721Z" level=info msg="createCtr: deleting container e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7 from storage" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075873597Z" level=info msg="createCtr: deleting container ID e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da from idIndex" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075908928Z" level=info msg="createCtr: removing container e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075943992Z" level=info msg="createCtr: deleting container e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da from storage" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.077834944Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.078091888Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
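	
	Annotation: the recurring "Container creation error: cannot open sd-bus" means the OCI runtime tries to reach systemd over D-Bus during container creation and the socket is missing, which is consistent with CRI-O running with the systemd cgroup manager in an environment that does not expose sd-bus. A hedged triage sketch (config path and key are the CRI-O defaults, not taken from this report):
	
	    # Check which cgroup manager CRI-O is configured with on the node (sketch):
	    grep -rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	    # Switching [crio.runtime] cgroup_manager from "systemd" to "cgroupfs"
	    # and restarting CRI-O is the usual workaround when sd-bus is unavailable:
	    sudo systemctl restart crio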
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:01:59.036481    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:59.036947    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:59.038571    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:59.039131    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:01:59.040761    2194 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
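	
	Annotation: `kubectl describe nodes` runs on the node and talks to localhost:8443, so the refusals above are consistent with the kube-apiserver container never starting. A quick connectivity check along the same lines (sketch; run inside the node):
	
	    ss -tlnp | grep -w 8443 || echo "nothing listening on 8443"
	    curl -sk https://192.168.49.2:8443/healthz || echo "apiserver unreachable"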
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:01:59 up  1:44,  0 user,  load average: 0.84, 0.44, 0.21
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:01:52 ha-608611 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:52 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:52 ha-608611 kubelet[669]: E1009 19:01:52.076638     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.044824     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.044946     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078187     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > podSandboxID="2aa0bb22fe65d4986dc9aea3a26f98b8fe8d898e11d03753d94e780f1d08d143"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078296     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078325     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078326     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > podSandboxID="be3d21cea8492905ced72270cf5ee2be1474dc62f2c5be112263d2c070371c32"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078407     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.079540     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 19:01:55 ha-608611 kubelet[669]: E1009 19:01:55.058005     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 19:01:55 ha-608611 kubelet[669]: E1009 19:01:55.543425     669 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:01:56 ha-608611 kubelet[669]: E1009 19:01:56.136499     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce78ed4c19733  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:55:55.035850547 +0000 UTC m=+0.073552050,LastTimestamp:2025-10-09 18:55:55.035850547 +0000 UTC m=+0.073552050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 19:01:57 ha-608611 kubelet[669]: E1009 19:01:57.687531     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:01:57 ha-608611 kubelet[669]: I1009 19:01:57.852555     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 19:01:57 ha-608611 kubelet[669]: E1009 19:01:57.852833     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
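	
	Annotation: the kubelet log closes the failure chain: CRI-O cannot create the kube-apiserver, kube-scheduler, and kube-controller-manager containers (sd-bus), so the apiserver never comes up, so node registration, leases, and event posting all fail with "connection refused". Commands that would confirm this on the node (sketch; crictl and journalctl assumed present in the kicbase image):
	
	    sudo crictl ps -a --name kube-apiserver          # expect no container ever reaches Running
	    sudo journalctl -u crio --since "18:55" | grep -c "cannot open sd-bus"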
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (294.902953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-608611" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 81525,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:55:49.004659898Z",
	            "FinishedAt": "2025-10-09T18:55:47.866160923Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bd78a6800d0ca67ea1af19252b5bd24a3e3fc828387489071234de54472900f3",
	            "SandboxKey": "/var/run/docker/netns/bd78a6800d0c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:63:77:d6:c6:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "8c9e2b0ece853c05aed38cc16cf83246ef35859c6d45bb06281e9e29114c856e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
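
Annotation: the inspect output shows the guest apiserver port published on a loopback host port (NetworkSettings.Ports, 8443/tcp -> 127.0.0.1:32791 in this run). Checking that mapping directly (sketch):

    docker port ha-608611 8443/tcp                      # -> 127.0.0.1:32791 here
    curl -sk https://127.0.0.1:32791/version || echo "refused while apiserver is down"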
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (287.464598ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- rollout status deployment/busybox                      │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                       │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                          │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                            │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 18:55:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:55:48.782369   81326 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:55:48.782604   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782612   81326 out.go:374] Setting ErrFile to fd 2...
	I1009 18:55:48.782616   81326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:55:48.782782   81326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:55:48.783226   81326 out.go:368] Setting JSON to false
	I1009 18:55:48.784053   81326 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5897,"bootTime":1760030252,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:55:48.784156   81326 start.go:141] virtualization: kvm guest
	I1009 18:55:48.786563   81326 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:55:48.788077   81326 notify.go:220] Checking for updates...
	I1009 18:55:48.788126   81326 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:55:48.789665   81326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:55:48.791095   81326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:48.792613   81326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:55:48.794226   81326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:55:48.795794   81326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:55:48.797638   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:48.797748   81326 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:55:48.820855   81326 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:55:48.820923   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.876094   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.866734643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.876204   81326 docker.go:318] overlay module found
	I1009 18:55:48.877913   81326 out.go:179] * Using the docker driver based on existing profile
	I1009 18:55:48.879222   81326 start.go:305] selected driver: docker
	I1009 18:55:48.879244   81326 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.879315   81326 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:55:48.879420   81326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:55:48.933369   81326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 18:55:48.924148795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:55:48.933987   81326 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:55:48.934014   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:48.934075   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:48.934183   81326 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:48.936388   81326 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 18:55:48.937951   81326 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 18:55:48.939231   81326 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 18:55:48.940352   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:48.940388   81326 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:55:48.940398   81326 cache.go:64] Caching tarball of preloaded images
	I1009 18:55:48.940435   81326 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 18:55:48.940519   81326 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:55:48.940534   81326 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 18:55:48.940631   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:48.960098   81326 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 18:55:48.960121   81326 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 18:55:48.960153   81326 cache.go:242] Successfully downloaded all kic artifacts
	I1009 18:55:48.960177   81326 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:55:48.960231   81326 start.go:364] duration metric: took 36.84µs to acquireMachinesLock for "ha-608611"
	I1009 18:55:48.960251   81326 start.go:96] Skipping create...Using existing machine configuration
	I1009 18:55:48.960256   81326 fix.go:54] fixHost starting: 
	I1009 18:55:48.960457   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:48.977497   81326 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 18:55:48.977523   81326 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 18:55:48.979512   81326 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 18:55:48.979585   81326 cli_runner.go:164] Run: docker start ha-608611
	I1009 18:55:49.217604   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:49.237615   81326 kic.go:430] container "ha-608611" state is running.
	I1009 18:55:49.238028   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:49.257124   81326 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 18:55:49.257381   81326 machine.go:93] provisionDockerMachine start ...
	I1009 18:55:49.257452   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:49.276711   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:49.276957   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:49.276972   81326 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:55:49.277652   81326 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45654->127.0.0.1:32788: read: connection reset by peer
	I1009 18:55:52.425271   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.425302   81326 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 18:55:52.425356   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.443305   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.443509   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.443521   81326 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 18:55:52.597559   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 18:55:52.597633   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.615251   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:52.615459   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:52.615476   81326 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:55:52.760759   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:55:52.760787   81326 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 18:55:52.760833   81326 ubuntu.go:190] setting up certificates
	I1009 18:55:52.760848   81326 provision.go:84] configureAuth start
	I1009 18:55:52.760892   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:52.778450   81326 provision.go:143] copyHostCerts
	I1009 18:55:52.778486   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778529   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 18:55:52.778546   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 18:55:52.778622   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 18:55:52.778743   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778772   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 18:55:52.778782   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 18:55:52.778825   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 18:55:52.778905   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778928   81326 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 18:55:52.778938   81326 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 18:55:52.778979   81326 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 18:55:52.779124   81326 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 18:55:52.921150   81326 provision.go:177] copyRemoteCerts
	I1009 18:55:52.921251   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:55:52.921302   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:52.938746   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.041424   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 18:55:53.041487   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:55:53.059403   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 18:55:53.059465   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 18:55:53.077545   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 18:55:53.077599   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:55:53.095069   81326 provision.go:87] duration metric: took 334.207036ms to configureAuth
	I1009 18:55:53.095112   81326 ubuntu.go:206] setting minikube options for container-runtime
	I1009 18:55:53.095285   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:53.095376   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.113012   81326 main.go:141] libmachine: Using SSH client type: native
	I1009 18:55:53.113249   81326 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1009 18:55:53.113266   81326 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:55:53.371650   81326 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:55:53.371676   81326 machine.go:96] duration metric: took 4.114278074s to provisionDockerMachine
	I1009 18:55:53.371688   81326 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 18:55:53.371701   81326 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:55:53.371771   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:55:53.371842   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.390223   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.493994   81326 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:55:53.497842   81326 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:55:53.497867   81326 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 18:55:53.497877   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 18:55:53.497926   81326 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 18:55:53.498003   81326 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 18:55:53.498014   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 18:55:53.498111   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 18:55:53.506094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:53.524346   81326 start.go:296] duration metric: took 152.640721ms for postStartSetup
	I1009 18:55:53.524419   81326 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:55:53.524480   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.542600   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.642517   81326 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:55:53.646989   81326 fix.go:56] duration metric: took 4.686726649s for fixHost
	I1009 18:55:53.647050   81326 start.go:83] releasing machines lock for "ha-608611", held for 4.686806047s
	I1009 18:55:53.647103   81326 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 18:55:53.665515   81326 ssh_runner.go:195] Run: cat /version.json
	I1009 18:55:53.665578   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.665620   81326 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:55:53.665678   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:53.684362   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.684684   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 18:55:53.836250   81326 ssh_runner.go:195] Run: systemctl --version
	I1009 18:55:53.842642   81326 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:55:53.877786   81326 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:55:53.882350   81326 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:55:53.882415   81326 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:55:53.890015   81326 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 18:55:53.890039   81326 start.go:495] detecting cgroup driver to use...
	I1009 18:55:53.890072   81326 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 18:55:53.890126   81326 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:55:53.903830   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:55:53.915636   81326 docker.go:218] disabling cri-docker service (if available) ...
	I1009 18:55:53.915680   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:55:53.929373   81326 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:55:53.941718   81326 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:55:54.017230   81326 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:55:54.097019   81326 docker.go:234] disabling docker service ...
	I1009 18:55:54.097119   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:55:54.110968   81326 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:55:54.123470   81326 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:55:54.198047   81326 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:55:54.273477   81326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
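The sequence above hands the node's container runtime over to CRI-O by stopping, disabling, and masking both cri-dockerd and Docker itself. A minimal sketch of the same hand-off, condensed from the logged systemctl calls (run inside the node, e.g. via minikube ssh -p ha-608611):

    # Condensed from the logged commands: stop, disable, and mask the
    # Docker-side services so only CRI-O serves the CRI socket.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"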
	I1009 18:55:54.285686   81326 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:55:54.299501   81326 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 18:55:54.299553   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.307932   81326 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 18:55:54.307990   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.316516   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.324850   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.333127   81326 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:55:54.340857   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.349439   81326 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.357872   81326 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:55:54.367094   81326 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:55:54.374845   81326 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:55:54.382734   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.461355   81326 ssh_runner.go:195] Run: sudo systemctl restart crio
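The sed calls above edit CRI-O's drop-in config in place: pin the pause image, switch the cgroup manager to systemd with conmon in the pod cgroup, and (via default_sysctls, same pattern) open unprivileged ports. A standalone sketch of the core edits, assuming the same drop-in path shown in the log:

    # Sketch of minikube's in-place CRI-O reconfiguration (drop-in path from the log).
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                        # drop any stale value
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF" # re-add below cgroup_manager
    sudo systemctl daemon-reload && sudo systemctl restart crio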
	I1009 18:55:54.565572   81326 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:55:54.565624   81326 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:55:54.571180   81326 start.go:563] Will wait 60s for crictl version
	I1009 18:55:54.571234   81326 ssh_runner.go:195] Run: which crictl
	I1009 18:55:54.574912   81326 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 18:55:54.598972   81326 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 18:55:54.599070   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.626916   81326 ssh_runner.go:195] Run: crio --version
	I1009 18:55:54.656626   81326 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 18:55:54.658243   81326 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:55:54.675913   81326 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:55:54.680110   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
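The one-liner above is minikube's idempotent /etc/hosts update: filter out any stale entry for the name, append the fresh mapping, and copy the result back in one step. Generalized as a sketch (pin_host is a hypothetical helper following the logged pattern, not minikube code):

    # Hypothetical helper following the logged pattern: pin NAME to IP in /etc/hosts.
    pin_host() {
        local ip=$1 name=$2
        { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
        sudo cp "/tmp/h.$$" /etc/hosts
    }
    pin_host 192.168.49.1 host.minikube.internal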
	I1009 18:55:54.690492   81326 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:55:54.690604   81326 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 18:55:54.690644   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.722701   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.722720   81326 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:55:54.722761   81326 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:55:54.747850   81326 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:55:54.747875   81326 cache_images.go:85] Images are preloaded, skipping loading
	I1009 18:55:54.747882   81326 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 18:55:54.748003   81326 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
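The [Service] override above is installed as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below), with the empty ExecStart= line clearing the packaged command before setting minikube's own. One way to confirm the override took effect, as a sketch:

    # Sketch: verify the kubelet drop-in is the effective unit definition.
    systemctl cat kubelet                 # base unit plus 10-kubeadm.conf
    systemctl show -p ExecStart kubelet   # should point at /var/lib/minikube/binaries/v1.34.1/kubelet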
	I1009 18:55:54.748077   81326 ssh_runner.go:195] Run: crio config
	I1009 18:55:54.792222   81326 cni.go:84] Creating CNI manager for ""
	I1009 18:55:54.792240   81326 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 18:55:54.792253   81326 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 18:55:54.792274   81326 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:55:54.792387   81326 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
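The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one multi-document file) is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch, recent kubeadm releases can sanity-check such a file before it is used, assuming kubeadm sits next to kubelet in the node's binaries directory:

    # Sketch: validate the rendered multi-document kubeadm config on the node.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new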
	
	I1009 18:55:54.792445   81326 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 18:55:54.800546   81326 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 18:55:54.800612   81326 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:55:54.808306   81326 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 18:55:54.820571   81326 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:55:54.832686   81326 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 18:55:54.845124   81326 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:55:54.848713   81326 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:55:54.858608   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:54.936048   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:54.960660   81326 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 18:55:54.960682   81326 certs.go:195] generating shared ca certs ...
	I1009 18:55:54.960703   81326 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:54.960866   81326 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 18:55:54.960929   81326 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 18:55:54.960943   81326 certs.go:257] generating profile certs ...
	I1009 18:55:54.961058   81326 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 18:55:54.961104   81326 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 18:55:54.961152   81326 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:55:55.543578   81326 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a ...
	I1009 18:55:55.543608   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a: {Name:mk997984d16894bde965cc8b9fac1d81fe6f4952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543774   81326 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a ...
	I1009 18:55:55.543787   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a: {Name:mk0466ac68a27af88f893685594376a4479a0b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:55.543856   81326 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt
	I1009 18:55:55.543984   81326 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key
	I1009 18:55:55.544117   81326 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 18:55:55.544131   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 18:55:55.544165   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 18:55:55.544184   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 18:55:55.544201   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 18:55:55.544214   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 18:55:55.544227   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 18:55:55.544240   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 18:55:55.544255   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 18:55:55.544302   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 18:55:55.544330   81326 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 18:55:55.544341   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 18:55:55.544368   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:55:55.544389   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:55:55.544410   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 18:55:55.544447   81326 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 18:55:55.544473   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.544487   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.544500   81326 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.545009   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:55:55.563316   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 18:55:55.580320   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:55:55.597589   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 18:55:55.614398   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 18:55:55.631965   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:55:55.648792   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:55:55.666094   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:55:55.683111   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:55:55.700010   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 18:55:55.717340   81326 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 18:55:55.734654   81326 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:55:55.747411   81326 ssh_runner.go:195] Run: openssl version
	I1009 18:55:55.753470   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:55:55.761715   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765434   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.765492   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:55:55.798918   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:55:55.807085   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 18:55:55.815823   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819621   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.819677   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 18:55:55.854342   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 18:55:55.862610   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 18:55:55.870964   81326 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874789   81326 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.874839   81326 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 18:55:55.909615   81326 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
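Each CA above is installed twice: copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL's hashed directory lookup finds it. The two logged steps combine into a short sketch:

    # Sketch: install one CA into OpenSSL's hashed directory (paths from the log).
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"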
	I1009 18:55:55.918204   81326 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:55:55.922181   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 18:55:55.956689   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 18:55:55.991888   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 18:55:56.025768   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 18:55:56.066085   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 18:55:56.107192   81326 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
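The six openssl runs above check every control-plane certificate with -checkend 86400, i.e. they fail if the cert expires within the next 24 hours. The same check as a loop, using the paths from the log:

    # Sketch: flag control-plane certs that expire within 24h (86400s).
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
        openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
            || echo "expiring soon: ${c}.crt"
    done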
	I1009 18:55:56.142373   81326 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:55:56.142453   81326 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:55:56.142506   81326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:55:56.169309   81326 cri.go:89] found id: ""
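
`found id: ""` means the label-filtered crictl query matched nothing, i.e. no kube-system containers are running yet. A sketch reproducing the same query from Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // List kube-system container IDs the way cri.go does: crictl ps with a
    // pod-namespace label filter; an empty result means the control plane
    // has not come up yet.
    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }
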
	I1009 18:55:56.169373   81326 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:55:56.177273   81326 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 18:55:56.177294   81326 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 18:55:56.177352   81326 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 18:55:56.184818   81326 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 18:55:56.185183   81326 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.185297   81326 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 18:55:56.185607   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
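
kubeconfig.go then adds the missing cluster and context entries under the file lock it just acquired. A rough sketch of that repair with client-go's clientcmd package; names and paths are copied from the log, it assumes the profile's user entry already exists in the file, and it is not minikube's actual code:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // Add the missing "ha-608611" cluster and context entries to a kubeconfig,
    // roughly what the repair step above does.
    func main() {
        path := "/home/jenkins/minikube-integration/21139-11374/kubeconfig"
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        cfg.Clusters["ha-608611"] = &api.Cluster{
            Server:               "https://192.168.49.2:8443",
            CertificateAuthority: "/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt",
        }
        // Assumes the "ha-608611" AuthInfo (user) entry is already present.
        cfg.Contexts["ha-608611"] = &api.Context{Cluster: "ha-608611", AuthInfo: "ha-608611"}
        cfg.CurrentContext = "ha-608611"
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            panic(err)
        }
    }
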
	I1009 18:55:56.186078   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.186554   81326 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 18:55:56.186572   81326 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 18:55:56.186576   81326 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 18:55:56.186579   81326 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 18:55:56.186582   81326 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 18:55:56.186644   81326 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
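
The kapi.go dump above is a client-go rest.Config authenticated with the profile's client certificate. A minimal sketch that builds an equivalent config and clientset (paths copied from the log):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // Build a clientset from a cert-based rest.Config like the one logged above.
    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("clientset ready:", clientset != nil)
    }
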
	I1009 18:55:56.186913   81326 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 18:55:56.194885   81326 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 18:55:56.194918   81326 kubeadm.go:601] duration metric: took 17.618968ms to restartPrimaryControlPlane
	I1009 18:55:56.194926   81326 kubeadm.go:402] duration metric: took 52.565569ms to StartCluster
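
The "does not require reconfiguration" decision at kubeadm.go:634 rides on diff's exit status: 0 means the rendered kubeadm.yaml matches the one on disk, 1 means it changed. A sketch of reading that exit code from Go; the helper name is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // diff exits 0 when the files match and 1 when they differ; that exit
    // code decides whether the kubeadm config must be regenerated.
    func kubeadmConfigChanged() (bool, error) {
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
        if err == nil {
            return false, nil // identical: no reconfiguration needed
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            return true, nil // files differ
        }
        return false, err // exit 2 or worse: diff itself failed
    }

    func main() {
        changed, err := kubeadmConfigChanged()
        fmt.Println("changed:", changed, "err:", err)
    }
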
	I1009 18:55:56.194954   81326 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195014   81326 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:55:56.195534   81326 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:55:56.195769   81326 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:55:56.195852   81326 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 18:55:56.195922   81326 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:55:56.195932   81326 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 18:55:56.195965   81326 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 18:55:56.195925   81326 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 18:55:56.195992   81326 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 18:55:56.196019   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.196264   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.196400   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.199213   81326 out.go:179] * Verifying Kubernetes components...
	I1009 18:55:56.200413   81326 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:55:56.216177   81326 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 18:55:56.216605   81326 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 18:55:56.216648   81326 host.go:66] Checking if "ha-608611" exists ...
	I1009 18:55:56.217159   81326 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 18:55:56.217419   81326 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:55:56.219196   81326 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.219223   81326 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:55:56.219282   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.243943   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
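
sshutil.go dials the docker-published port (32788 here) on 127.0.0.1 with the machine's id_rsa. A minimal equivalent with golang.org/x/crypto/ssh; host-key checking is relaxed because the target is a local container:

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // Dial the node over the docker-published SSH port with the machine key,
    // mirroring the sshutil.go line above.
    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32788", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
    }
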
	I1009 18:55:56.245925   81326 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:55:56.245944   81326 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:55:56.245984   81326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 18:55:56.263872   81326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
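
The inspect template in the cli_runner calls digs the host port mapped to the container's 22/tcp out of docker's port bindings. Run standalone it looks like this (template copied verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // Recover the host port docker mapped to the container's 22/tcp, using
    // the same inspect template as the cli_runner calls above.
    func main() {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-608611").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 32788
    }
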
	I1009 18:55:56.303542   81326 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:55:56.316831   81326 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
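
node_ready.go now polls the node's Ready condition for up to 6 minutes, treating the connection-refused errors that follow as transient. A sketch of such a loop with client-go; waitForReady is an illustrative name, not minikube's function:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Poll the node's Ready condition for up to 6 minutes, tolerating the
    // connection-refused errors seen while the apiserver restarts.
    func waitForReady(cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient: apiserver not answering yet
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-11374/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForReady(cs, "ha-608611"); err != nil {
            panic(err)
        }
    }
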
	I1009 18:55:56.352305   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:55:56.372260   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.412054   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.412090   81326 retry.go:31] will retry after 210.547469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
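
Every apply in this stretch fails the same way: kubectl's validation step needs the OpenAPI schema from the apiserver, nothing is listening on 8443 yet, and retry.go reschedules with a growing, jittered delay. A hand-rolled sketch of that retry pattern; minikube's actual backoff policy may differ:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // Retry a kubectl apply with a growing, jittered delay, in the spirit of
    // the retry.go lines above (sudo accepts leading VAR=value assignments).
    func applyWithRetry(manifest string, attempts int) error {
        delay := 200 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("apply failed, will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
            panic(err)
        }
    }
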
	W1009 18:55:56.427954   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.427986   81326 retry.go:31] will retry after 365.761186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.623265   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:56.675568   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.675605   81326 retry.go:31] will retry after 331.492885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.794903   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:56.846158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:56.846190   81326 retry.go:31] will retry after 366.903412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.007285   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.058254   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.058285   81326 retry.go:31] will retry after 440.442086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.213614   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.266588   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.266622   81326 retry.go:31] will retry after 403.844371ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.499702   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:57.552130   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.552176   81326 retry.go:31] will retry after 1.153605517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.671430   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:57.724158   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:57.724189   81326 retry.go:31] will retry after 1.186791372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:55:58.317829   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:55:58.706293   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:55:58.758710   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.758743   81326 retry.go:31] will retry after 1.743017897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.911763   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:58.963731   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:58.963764   81326 retry.go:31] will retry after 777.451228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.742307   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:55:59.794404   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:55:59.794436   81326 retry.go:31] will retry after 1.290318475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:00.318311   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:00.502629   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:00.555745   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:00.555777   81326 retry.go:31] will retry after 2.524197607s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.084941   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:01.136443   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:01.136470   81326 retry.go:31] will retry after 1.577041718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.713959   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:02.768944   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:02.769029   81326 retry.go:31] will retry after 2.739822337s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:02.817505   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:03.080936   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:03.135285   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:03.135312   81326 retry.go:31] will retry after 2.274306578s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:04.818421   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:05.409777   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:05.464614   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.464653   81326 retry.go:31] will retry after 2.562562636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.509838   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:05.563089   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:05.563116   81326 retry.go:31] will retry after 7.257063106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:07.317778   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:08.028172   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:08.085551   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:08.085584   81326 retry.go:31] will retry after 5.304285212s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:09.817756   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:12.317749   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:12.820933   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:12.874853   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:12.874882   81326 retry.go:31] will retry after 14.146267666s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.390058   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:13.445661   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:13.445690   81326 retry.go:31] will retry after 12.009663375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:14.317787   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:16.817710   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:19.317672   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:21.817409   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:23.817676   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:25.455892   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:25.508571   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:25.508602   81326 retry.go:31] will retry after 16.328819921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:26.317511   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:27.021826   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:27.074125   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:27.074173   81326 retry.go:31] will retry after 16.507388606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:28.317794   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:30.318017   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:32.318056   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:34.318298   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:36.817418   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:38.817580   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:41.317407   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:41.838597   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:56:41.891282   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:41.891335   81326 retry.go:31] will retry after 22.626101475s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:43.317591   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:56:43.581928   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:56:43.635774   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:56:43.635813   81326 retry.go:31] will retry after 29.761890826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:56:45.317977   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:47.817468   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:49.817818   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:51.818094   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:54.317753   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:56.318437   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:56:58.817821   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:01.317707   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:03.318244   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:04.517790   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:04.570824   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:04.570858   81326 retry.go:31] will retry after 21.453197357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:05.817503   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:07.817615   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:10.317488   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:12.318329   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:13.398664   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:13.451327   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 18:57:13.451363   81326 retry.go:31] will retry after 18.539744202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:14.817577   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:16.818008   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:19.317830   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:21.817431   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:23.817855   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:26.024797   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 18:57:26.087554   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:26.087679   81326 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 18:57:26.317736   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:28.817913   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:31.317538   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 18:57:31.991746   81326 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 18:57:32.045057   81326 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 18:57:32.045194   81326 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 18:57:32.046928   81326 out.go:179] * Enabled addons: 
	I1009 18:57:32.048585   81326 addons.go:514] duration metric: took 1m35.852737584s for enable addons: enabled=[]
	W1009 18:57:33.318217   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:35.817737   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 18:57:38.317513   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 109 near-identical node_ready retries (18:57:40 through 19:01:53, polled every ~2.5s) elided; each fails with: Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused ...]
	W1009 19:01:55.818044   81326 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:01:56.318011   81326 node_ready.go:38] duration metric: took 6m0.001141049s for node "ha-608611" to be "Ready" ...
	I1009 19:01:56.320179   81326 out.go:203] 
	W1009 19:01:56.321631   81326 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:01:56.321647   81326 out.go:285] * 
	W1009 19:01:56.323308   81326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:01:56.324645   81326 out.go:203] 
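
The 6m0s wait that expires above is, at bottom, a poll of the node's "Ready" condition against the apiserver (node_ready.go), with the addon applies retried on a jittered backoff (retry.go). A minimal client-go sketch of the same Ready poll, for orientation only — the kubeconfig path, node name, ~2.5s interval, and 6-minute timeout are taken from the log, but the helper itself is illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path and node name as they appear in the log above
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // the wait that times out above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-608611", metav1.GetOptions{})
		if err != nil {
			// while the apiserver is down, this is the "connection refused" seen above
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to be Ready")
}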
	
	
	==> CRI-O <==
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.072459628Z" level=info msg="createCtr: deleting container cee82ca096ada2745ddfd20399be0511826b61e086329781f9b2247a3f7121f6 from storage" id=c50413e0-d421-4959-905a-4d1005a1f36a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.074921498Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=db4c736a-2288-48c8-8b24-876eaba6d487 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:52 ha-608611 crio[519]: time="2025-10-09T19:01:52.075219493Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=c50413e0-d421-4959-905a-4d1005a1f36a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.045242534Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=28e234cb-d240-47e3-869e-ed1c2e16a7cc name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.045371754Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d6a9ff02-69dc-418a-8463-e96a6076d37d name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.046045414Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d2871d16-d577-4a09-a337-69cf3392fbd6 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.04607563Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d9af252a-ed9a-4f37-b905-7bd43bd870b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.047302632Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.047703525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.048079465Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.048730686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.053214604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.053763132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.055064813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.055575645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.073197289Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074521483Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074607421Z" level=info msg="createCtr: deleting container ID e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7 from idIndex" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074648652Z" level=info msg="createCtr: removing container e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.074688721Z" level=info msg="createCtr: deleting container e9f05c8892ff26fa5ccef86a659c31e9226fe862a51863149b001698759aacb7 from storage" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075873597Z" level=info msg="createCtr: deleting container ID e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da from idIndex" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075908928Z" level=info msg="createCtr: removing container e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.075943992Z" level=info msg="createCtr: deleting container e0a6aa110e439ce93809dcc873bdb0ebf7b51a92ab1d8acad64b2c5a5ad954da from storage" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.077834944Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=713646c7-d288-4595-b7d5-3672898aaa27 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:01:53 ha-608611 crio[519]: time="2025-10-09T19:01:53.078091888Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=bcbc9d53-4f9c-4dcb-80b9-c1c94f638967 name=/runtime.v1.RuntimeService/CreateContainer
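
Every CreateContainer attempt above fails the same way: "cannot open sd-bus: No such file or directory". That error typically means the OCI runtime is configured for the systemd cgroup driver but cannot reach systemd's bus socket inside the node container; one common mitigation is switching CRI-O's cgroup_manager to "cgroupfs" in crio.conf, though the log does not show which driver this run used. A small diagnostic probe of the conventional socket paths — the paths are assumptions about the environment, not something the log confirms:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Conventional systemd/D-Bus socket locations; if both are absent, a
	// runtime using the systemd cgroup driver has no sd-bus to open.
	paths := []string{
		"/run/systemd/private",        // systemd's private bus (used when running as root)
		"/run/dbus/system_bus_socket", // the system D-Bus socket
	}
	for _, p := range paths {
		if fi, err := os.Stat(p); err != nil {
			fmt.Printf("%-30s missing: %v\n", p, err)
		} else {
			fmt.Printf("%-30s present (%s)\n", p, fi.Mode())
		}
	}
}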
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:02:00.588711    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:02:00.589360    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:02:00.590967    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:02:00.591505    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:02:00.593061    2373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
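
Both endpoints in the errors above are refused: the in-guest kubectl targets localhost:8443 and the host-side poller targets 192.168.49.2:8443, consistent with the kube-apiserver container never starting. A quick connectivity probe of the same two addresses (taken from the log; the probe itself is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// the two apiserver endpoints that appear in the errors above
	for _, addr := range []string{"127.0.0.1:8443", "192.168.49.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // "connection refused" matches the log
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}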
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:02:00 up  1:44,  0 user,  load average: 0.85, 0.45, 0.22
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:01:52 ha-608611 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:52 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:52 ha-608611 kubelet[669]: E1009 19:01:52.076638     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.044824     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.044946     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078187     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > podSandboxID="2aa0bb22fe65d4986dc9aea3a26f98b8fe8d898e11d03753d94e780f1d08d143"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078296     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078325     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078326     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > podSandboxID="be3d21cea8492905ced72270cf5ee2be1474dc62f2c5be112263d2c070371c32"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.078407     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:01:53 ha-608611 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:01:53 ha-608611 kubelet[669]:  > logger="UnhandledError"
	Oct 09 19:01:53 ha-608611 kubelet[669]: E1009 19:01:53.079540     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 19:01:55 ha-608611 kubelet[669]: E1009 19:01:55.058005     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 19:01:55 ha-608611 kubelet[669]: E1009 19:01:55.543425     669 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 09 19:01:56 ha-608611 kubelet[669]: E1009 19:01:56.136499     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce78ed4c19733  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 18:55:55.035850547 +0000 UTC m=+0.073552050,LastTimestamp:2025-10-09 18:55:55.035850547 +0000 UTC m=+0.073552050,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 19:01:57 ha-608611 kubelet[669]: E1009 19:01:57.687531     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:01:57 ha-608611 kubelet[669]: I1009 19:01:57.852555     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 19:01:57 ha-608611 kubelet[669]: E1009 19:01:57.852833     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (289.312279ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.54s)

x
+
TestMultiControlPlane/serial/StopCluster (1.36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-608611 stop --alsologtostderr -v 5: (1.210984611s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5: exit status 7 (65.031846ms)

-- stdout --
	ha-608611
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1009 19:02:02.217781   86877 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:02:02.218043   86877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.218054   86877 out.go:374] Setting ErrFile to fd 2...
	I1009 19:02:02.218058   86877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.218330   86877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:02:02.218549   86877 out.go:368] Setting JSON to false
	I1009 19:02:02.218576   86877 mustload.go:65] Loading cluster: ha-608611
	I1009 19:02:02.218628   86877 notify.go:220] Checking for updates...
	I1009 19:02:02.219014   86877 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:02.219032   86877 status.go:174] checking status of ha-608611 ...
	I1009 19:02:02.219494   86877 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.237901   86877 status.go:371] ha-608611 host status = "Stopped" (err=<nil>)
	I1009 19:02:02.237923   86877 status.go:384] host is not running, skipping remaining checks
	I1009 19:02:02.237929   86877 status.go:176] ha-608611 status: &{Name:ha-608611 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
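
The status command above short-circuits once docker reports the container stopped: the cli_runner line at 19:02:02.219494 runs `docker container inspect ha-608611 --format={{.State.Status}}` and skips the remaining kubelet/apiserver checks. The same probe, sketched with os/exec — the mapping from docker's "exited" state to minikube's "Stopped" is inferred from the log, not copied from minikube's code:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// the same docker probe the status command issues via cli_runner
	out, err := exec.Command("docker", "container", "inspect",
		"ha-608611", "--format", "{{.State.Status}}").Output()
	if err != nil {
		log.Fatalf("inspect failed: %v", err)
	}
	fmt.Println("docker state:", strings.TrimSpace(string(out))) // "exited" for a stopped node
}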
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5": ha-608611
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5": ha-608611
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-608611 status --alsologtostderr -v 5": ha-608611
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
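
All three assertions above reduce to counting fixed markers in the status text: the messages expect two control-plane entries, three stopped kubelets, and two stopped apiservers, but only the single surviving node is printed. A sketch of that style of check — the exact helpers in ha_test.go may differ:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// the status output for the lone remaining node, as printed above
	status := "ha-608611\n" +
		"type: Control Plane\n" +
		"host: Stopped\n" +
		"kubelet: Stopped\n" +
		"apiserver: Stopped\n" +
		"kubeconfig: Stopped\n"

	checks := map[string]int{ // marker -> count the assertions expect
		"type: Control Plane": 2,
		"kubelet: Stopped":    3,
		"apiserver: Stopped":  2,
	}
	for marker, want := range checks {
		if got := strings.Count(status, marker); got != want {
			fmt.Printf("want %d occurrences of %q, got %d\n", want, marker, got)
		}
	}
}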

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-09T18:55:49.004659898Z",
	            "FinishedAt": "2025-10-09T19:02:01.288438646Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
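The dump above is standard `docker container inspect` output. When only one field matters, the same data can be pulled with a Go-template format string, as this harness does elsewhere in the log; a minimal sketch against the same container:

    # Query single fields from the inspect data (container name from the dump above)
    docker container inspect ha-608611 --format '{{.State.Status}}'
    docker container inspect ha-608611 --format '{{.HostConfig.Memory}}'   # 3221225472 bytes = 3 GiB
    # Port lookups only return data while the container runs; "Ports" is {} in the stopped dump above
    docker container inspect ha-608611 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'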
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 7 (64.575428ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-608611" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.36s)
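`minikube status` encodes cluster state in its exit code rather than treating non-zero as outright failure, which is why the harness marks exit status 7 as "may be ok" for a stopped host. A sketch of the same probe by hand:

    # Reproduce the helpers_test.go status check; a stopped host prints "Stopped" and exits non-zero
    out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
    echo "exit code: $?"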
TestMultiControlPlane/serial/RestartCluster (368.35s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1009 19:05:34.609913   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.04094222s)
-- stdout --
	* [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	
-- /stdout --
** stderr ** 
	I1009 19:02:02.366634   86934 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:02:02.366900   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.366909   86934 out.go:374] Setting ErrFile to fd 2...
	I1009 19:02:02.366914   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.367183   86934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:02:02.367673   86934 out.go:368] Setting JSON to false
	I1009 19:02:02.368576   86934 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6270,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:02:02.368665   86934 start.go:141] virtualization: kvm guest
	I1009 19:02:02.370893   86934 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:02:02.372496   86934 notify.go:220] Checking for updates...
	I1009 19:02:02.372569   86934 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:02:02.374010   86934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:02:02.375862   86934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:02.377311   86934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 19:02:02.378757   86934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:02:02.380255   86934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:02:02.382046   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:02.382523   86934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:02:02.405566   86934 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:02:02.405698   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.460511   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.449781611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.460617   86934 docker.go:318] overlay module found
	I1009 19:02:02.467934   86934 out.go:179] * Using the docker driver based on existing profile
	I1009 19:02:02.472893   86934 start.go:305] selected driver: docker
	I1009 19:02:02.472930   86934 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.473021   86934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:02:02.473177   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.530403   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.520535313 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.530972   86934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:02:02.530995   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:02.531058   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:02.531099   86934 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:02:02.536297   86934 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 19:02:02.537921   86934 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:02:02.539315   86934 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:02:02.540530   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:02.540558   86934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:02:02.540566   86934 cache.go:64] Caching tarball of preloaded images
	I1009 19:02:02.540649   86934 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:02:02.540659   86934 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:02:02.540644   86934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:02:02.540747   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.560713   86934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:02:02.560736   86934 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:02:02.560755   86934 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:02:02.560776   86934 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:02:02.560826   86934 start.go:364] duration metric: took 34.956µs to acquireMachinesLock for "ha-608611"
	I1009 19:02:02.560843   86934 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:02:02.560848   86934 fix.go:54] fixHost starting: 
	I1009 19:02:02.561074   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.578279   86934 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 19:02:02.578318   86934 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:02:02.580033   86934 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 19:02:02.580095   86934 cli_runner.go:164] Run: docker start ha-608611
	I1009 19:02:02.818090   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.837398   86934 kic.go:430] container "ha-608611" state is running.
	I1009 19:02:02.837716   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:02.857081   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.857332   86934 machine.go:93] provisionDockerMachine start ...
	I1009 19:02:02.857395   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:02.875516   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:02.875763   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:02.875778   86934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:02:02.876346   86934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:32793: read: connection reset by peer
	I1009 19:02:06.023115   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.023157   86934 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 19:02:06.023213   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.041188   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.041419   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.041437   86934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 19:02:06.195947   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.196039   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.214427   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.214707   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.214726   86934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:02:06.359913   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:02:06.359938   86934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 19:02:06.359976   86934 ubuntu.go:190] setting up certificates
	I1009 19:02:06.359987   86934 provision.go:84] configureAuth start
	I1009 19:02:06.360055   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:06.377565   86934 provision.go:143] copyHostCerts
	I1009 19:02:06.377598   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377621   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 19:02:06.377632   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377706   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 19:02:06.377792   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377809   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 19:02:06.377815   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377841   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 19:02:06.377885   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377901   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 19:02:06.377907   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377930   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 19:02:06.377978   86934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 19:02:06.551568   86934 provision.go:177] copyRemoteCerts
	I1009 19:02:06.551627   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:02:06.551664   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.569563   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
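The `new ssh client` entries show how the provisioner reaches the node: container port 22 is published on a host port (32793 in this run), and the profile's generated key authenticates as user "docker". A manual equivalent, assuming the same paths as this run:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611)
    ssh -i /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa \
        -p "$PORT" docker@127.0.0.1 hostname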
	I1009 19:02:06.671559   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:02:06.671624   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:02:06.689362   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:02:06.689417   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:02:06.706820   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:02:06.706884   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:02:06.723659   86934 provision.go:87] duration metric: took 363.656182ms to configureAuth
	I1009 19:02:06.723684   86934 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:02:06.723837   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:06.723932   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.741523   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.741719   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.741733   86934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:02:06.997259   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:02:06.997278   86934 machine.go:96] duration metric: took 4.139930505s to provisionDockerMachine
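The SSH command above drops an environment file at /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR (10.96.0.0/12) is trusted as an insecure registry range. A quick way to confirm it landed (sketch, using the same profile):

    out/minikube-linux-amd64 -p ha-608611 ssh -- cat /etc/sysconfig/crio.minikube
    out/minikube-linux-amd64 -p ha-608611 ssh -- systemctl is-active crio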
	I1009 19:02:06.997295   86934 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 19:02:06.997303   86934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:02:06.997364   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:02:06.997436   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.015165   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.117424   86934 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:02:07.121129   86934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:02:07.121172   86934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:02:07.121187   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 19:02:07.121231   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 19:02:07.121302   86934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 19:02:07.121313   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 19:02:07.121398   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:02:07.128962   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:07.146444   86934 start.go:296] duration metric: took 149.135002ms for postStartSetup
	I1009 19:02:07.146528   86934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:02:07.146561   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.164604   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.263216   86934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:02:07.267755   86934 fix.go:56] duration metric: took 4.706900009s for fixHost
	I1009 19:02:07.267794   86934 start.go:83] releasing machines lock for "ha-608611", held for 4.706943222s
	I1009 19:02:07.267857   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:07.284443   86934 ssh_runner.go:195] Run: cat /version.json
	I1009 19:02:07.284488   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.284518   86934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:02:07.284564   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.302426   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.302797   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.452812   86934 ssh_runner.go:195] Run: systemctl --version
	I1009 19:02:07.459227   86934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:02:07.492322   86934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:02:07.496837   86934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:02:07.496893   86934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:02:07.504414   86934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:02:07.504435   86934 start.go:495] detecting cgroup driver to use...
	I1009 19:02:07.504461   86934 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:02:07.504497   86934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:02:07.518639   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:02:07.530028   86934 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:02:07.530080   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:02:07.543210   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:02:07.554574   86934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:02:07.631689   86934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:02:07.710043   86934 docker.go:234] disabling docker service ...
	I1009 19:02:07.710103   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:02:07.723929   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:02:07.736312   86934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:02:07.813951   86934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:02:07.891501   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:02:07.903630   86934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:02:07.917404   86934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:02:07.917468   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.926188   86934 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:02:07.926260   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.935124   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.943686   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.952342   86934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:02:07.960386   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.969265   86934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.977652   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.986892   86934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:02:07.994317   86934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:02:08.001853   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.079819   86934 ssh_runner.go:195] Run: sudo systemctl restart crio
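The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before this restart. Reconstructed from those commands (the TOML section headers are assumed from CRI-O's stock drop-in layout, not captured here), the touched keys end up roughly as:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]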
	I1009 19:02:08.184066   86934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:02:08.184131   86934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:02:08.188032   86934 start.go:563] Will wait 60s for crictl version
	I1009 19:02:08.188080   86934 ssh_runner.go:195] Run: which crictl
	I1009 19:02:08.191568   86934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:02:08.215064   86934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:02:08.215130   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.242668   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.272310   86934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:02:08.273867   86934 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:02:08.291028   86934 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:02:08.295020   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.304927   86934 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:02:08.305037   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:08.305076   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.334586   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.334605   86934 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:02:08.334646   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.359864   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.359884   86934 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:02:08.359891   86934 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:02:08.359982   86934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:02:08.360041   86934 ssh_runner.go:195] Run: crio config
	I1009 19:02:08.403513   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:08.403536   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:08.403553   86934 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:02:08.403581   86934 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:02:08.403758   86934 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
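The rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If kubeadm's validator is available in the pinned binaries (an assumption; `kubeadm config validate` ships with recent kubeadm releases), the file can be sanity-checked before use:

    # Sketch: validate the generated kubeadm config against the pinned kubeadm binary
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new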
	I1009 19:02:08.403826   86934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:02:08.411830   86934 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:02:08.411894   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:02:08.419468   86934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:02:08.432379   86934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:02:08.445216   86934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:02:08.457891   86934 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:02:08.461609   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.471627   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.548747   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:08.570439   86934 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 19:02:08.570462   86934 certs.go:195] generating shared ca certs ...
	I1009 19:02:08.570494   86934 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:08.570644   86934 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 19:02:08.570699   86934 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 19:02:08.570711   86934 certs.go:257] generating profile certs ...
	I1009 19:02:08.570809   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 19:02:08.570886   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 19:02:08.570937   86934 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 19:02:08.570950   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:02:08.570974   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:02:08.570990   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:02:08.571008   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:02:08.571026   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:02:08.571045   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:02:08.571062   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:02:08.571080   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:02:08.571169   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 19:02:08.571210   86934 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 19:02:08.571224   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:02:08.571259   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:02:08.571305   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:02:08.571336   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 19:02:08.571392   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:08.571429   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.571452   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.571470   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.572252   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:02:08.590519   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:02:08.608788   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:02:08.628771   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:02:08.652296   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:02:08.669442   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:02:08.686413   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:02:08.702970   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:02:08.719872   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 19:02:08.736350   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:02:08.753020   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 19:02:08.770756   86934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:02:08.782846   86934 ssh_runner.go:195] Run: openssl version
	I1009 19:02:08.788680   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:02:08.796773   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800287   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800342   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.834331   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:02:08.842576   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 19:02:08.850707   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854375   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854417   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.888132   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 19:02:08.896190   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 19:02:08.904560   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908107   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908167   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.941616   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
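
The ln -fs steps above follow OpenSSL's c_rehash convention: a CA certificate becomes system-trusted once it is linked under /etc/ssl/certs as <subject-hash>.0, where the hash comes from openssl x509 -hash -noout. A minimal Go sketch of that step, shelling out to openssl the same way the run above does over SSH (the paths in main are illustrative, not taken from this log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and symlinks the
// certificate into certsDir as <hash>.0, the layout `openssl verify` expects.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative invocation; the real run performs this on the node via SSH.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
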
	I1009 19:02:08.949683   86934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:02:08.953888   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:02:08.988843   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:02:09.022384   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:02:09.055785   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:02:09.100654   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:02:09.138816   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
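
The openssl x509 -checkend 86400 runs above confirm that none of the control-plane certificates expire within the next 24 hours (86400 seconds). A rough standard-library Go equivalent of one such check (the certificate path in main is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching the semantics of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}
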
	I1009 19:02:09.175373   86934 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:09.175553   86934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:02:09.175626   86934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:02:09.203282   86934 cri.go:89] found id: ""
	I1009 19:02:09.203337   86934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:02:09.211170   86934 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:02:09.211189   86934 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:02:09.211233   86934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:02:09.218525   86934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:02:09.218879   86934 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.218998   86934 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 19:02:09.219307   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.219795   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
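
The repair step above rewrites the kubeconfig to add the missing "ha-608611" cluster and context entries. A hedged sketch of that operation using client-go's clientcmd package (the values passed in main are illustrative, not an exact transcription of minikube's kubeconfig.go):

package main

import (
	"os"

	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds the missing cluster and context entries for a
// profile, roughly what the "needs updating (will repair)" step performs.
func repairKubeconfig(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// Illustrative values; the run above repairs the jenkins kubeconfig
	// with the profile's endpoint the same way.
	_ = repairKubeconfig(os.Getenv("KUBECONFIG"), "ha-608611",
		"https://192.168.49.2:8443", "/path/to/ca.crt")
}
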
	I1009 19:02:09.220220   86934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:02:09.220236   86934 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:02:09.220244   86934 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:02:09.220251   86934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:02:09.220258   86934 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:02:09.220304   86934 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:02:09.220587   86934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:02:09.228184   86934 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:02:09.228212   86934 kubeadm.go:601] duration metric: took 17.018594ms to restartPrimaryControlPlane
	I1009 19:02:09.228221   86934 kubeadm.go:402] duration metric: took 52.859442ms to StartCluster
	I1009 19:02:09.228235   86934 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228289   86934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.228747   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228944   86934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:09.229006   86934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:02:09.229112   86934 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 19:02:09.229129   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:09.229158   86934 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 19:02:09.229194   86934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 19:02:09.229132   86934 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 19:02:09.229294   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.229535   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.229746   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.232398   86934 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:09.234182   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:09.249828   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.250212   86934 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 19:02:09.250254   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.250729   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.253666   86934 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:09.255198   86934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.255220   86934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:09.255295   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.279913   86934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:09.279935   86934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:09.279997   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.280244   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.298795   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.340817   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:09.353728   86934 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 19:02:09.392883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.410568   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.451098   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.451133   86934 retry.go:31] will retry after 367.251438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:09.467582   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.467614   86934 retry.go:31] will retry after 202.583149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
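
These apply failures are expected while the apiserver is still restarting; the retry.go lines show minikube re-running the same kubectl apply after a growing, jittered delay. A minimal Go sketch of that pattern (the attempt count and backoff figures are illustrative, not the exact values retry.go uses):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
// succeeds or attempts are exhausted, sleeping a jittered, growing delay
// between tries.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("apply failed, will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2 // grow the base delay, as the lengthening intervals in the log show
	}
	return err
}

func main() {
	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 10)
}
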
	I1009 19:02:09.671071   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.728118   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.728165   86934 retry.go:31] will retry after 532.603205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.819359   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:09.870710   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.870743   86934 retry.go:31] will retry after 279.776339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.151303   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.203393   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.203423   86934 retry.go:31] will retry after 347.914412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.261624   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:10.312099   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.312161   86934 retry.go:31] will retry after 754.410355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.551883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.604202   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.604236   86934 retry.go:31] will retry after 610.586718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.067261   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.118580   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.118609   86934 retry.go:31] will retry after 814.916965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.215892   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:11.267928   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.267972   86934 retry.go:31] will retry after 1.45438082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:11.354562   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
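
Interleaved with the addon retries, node_ready.go polls the node's Ready condition every couple of seconds, tolerating connection-refused errors like the one above until the apiserver endpoint returns. A hedged client-go sketch of a single such probe (the kubeconfig path and node name are assumed from context, not an exact transcription of minikube's code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the named node and reports whether its Ready condition
// is True.
func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while the apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady("/var/lib/minikube/kubeconfig", "ha-608611")
	fmt.Println(ready, err)
}
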
	I1009 19:02:11.934655   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.986484   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.986513   86934 retry.go:31] will retry after 1.124124769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.723181   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:12.774656   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.774689   86934 retry.go:31] will retry after 1.232500279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.111665   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:13.165517   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.165552   86934 retry.go:31] will retry after 2.16641371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:13.355245   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:14.007705   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:14.059964   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:14.059992   86934 retry.go:31] will retry after 3.058954256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.332271   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:15.386449   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.386473   86934 retry.go:31] will retry after 3.386344457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:15.854462   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:17.120044   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:17.172191   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:17.172228   86934 retry.go:31] will retry after 5.108857909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:17.855169   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:18.773686   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:18.825043   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:18.825075   86934 retry.go:31] will retry after 4.328736912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:20.354784   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:22.282235   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:22.336593   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:22.336620   86934 retry.go:31] will retry after 8.469274029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:22.355192   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:23.154808   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:23.207154   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.207192   86934 retry.go:31] will retry after 9.59352501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:24.854514   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:27.355255   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:29.854449   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:30.806123   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:30.858604   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.858637   86934 retry.go:31] will retry after 13.297733582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:32.354331   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:32.800848   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:32.854427   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:32.854451   86934 retry.go:31] will retry after 8.328873063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:34.354417   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:36.354493   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:38.354571   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:40.354643   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:41.184043   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:41.237661   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:41.237694   86934 retry.go:31] will retry after 10.702907746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:42.854628   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:44.156959   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:44.208755   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:44.208790   86934 retry.go:31] will retry after 18.065677643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:45.354394   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:47.854450   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:49.854575   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:51.941580   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:51.995763   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:51.995796   86934 retry.go:31] will retry after 22.859549113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:52.354574   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:54.854455   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:57.354280   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:59.354606   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:01.854286   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:02.274776   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:02.329455   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:02.329481   86934 retry.go:31] will retry after 18.531804756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:03.854398   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:05.855306   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:08.354544   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:10.354642   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:12.854650   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:14.855487   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:14.910832   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:14.910866   86934 retry.go:31] will retry after 23.992226966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:15.354856   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:17.854777   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:19.855067   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:20.862242   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:20.916094   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:20.916120   86934 retry.go:31] will retry after 48.100773528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:22.355103   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:24.355298   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:26.855213   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:29.354698   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:31.854367   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:33.854849   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:36.354516   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:38.354590   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:38.903767   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:38.956838   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:38.956956   86934 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:03:40.854352   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:42.854763   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:44.855321   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:47.354581   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:49.355061   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:51.854592   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:53.855020   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:56.354334   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:58.354436   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:00.355133   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:02.355211   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:04.854653   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:06.854735   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:04:09.017880   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:04:09.070622   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:09.070759   86934 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:04:09.073837   86934 out.go:179] * Enabled addons: 
	I1009 19:04:09.075208   86934 addons.go:514] duration metric: took 1m59.846203175s for enable addons: enabled=[]
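Note the empty list in "enabled=[]": after the escalating retries above (roughly 18.5s, 24s, and 48s apart), both storage addons were abandoned, so the two-minute addons phase ended with nothing applied. Once the apiserver is reachable again they can be re-applied without restarting the cluster; a sketch using the profile name from this log:

    out/minikube-linux-amd64 -p ha-608611 addons enable storage-provisioner
    out/minikube-linux-amd64 -p ha-608611 addons enable default-storageclass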
	W1009 19:04:09.354738   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" message repeats every 2 to 2.5 seconds from 19:04:11 through 19:08:05; 105 lines elided ...]
	W1009 19:08:08.354483   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:08:09.353866   86934 node_ready.go:38] duration metric: took 6m0.000084484s for node "ha-608611" to be "Ready" ...
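The 6m0s here is minikube's default wait deadline for --wait=true (the --wait-timeout flag defaults to 6m0s). When a control plane is expected to converge slowly, as in this HA restart, the deadline can be raised explicitly; shown for illustration only, since a longer wait does not fix the underlying apiserver crash:

    out/minikube-linux-amd64 -p ha-608611 start --wait true --wait-timeout 10m0s \
      --alsologtostderr -v 5 --driver=docker --container-runtime=crio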
	I1009 19:08:09.356453   86934 out.go:203] 
	W1009 19:08:09.357971   86934 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:08:09.357991   86934 out.go:285] * 
	* 
	W1009 19:08:09.359976   86934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:08:09.361285   86934 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
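Exit status 80 is minikube's guest-error exit class, which matches the GUEST_START reason above. The fuller diagnostics that the failure box asks for can be captured with the command the log itself suggests:

    out/minikube-linux-amd64 -p ha-608611 logs --file=logs.txt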
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 87136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:02:02.606525681Z",
	            "FinishedAt": "2025-10-09T19:02:01.288438646Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34e364d29b995e9f397e4ff58ac14a48a876f810f7b517d883d6edcdbb1bf188",
	            "SandboxKey": "/var/run/docker/netns/34e364d29b99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:4f:68:d2:b9:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "9607201385fb50d883c3f937998cbc9542b588f50f9c40d6bdf9c41bc6baf758",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
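The inspect output narrows the failure down: the container is Running (restarted at 19:02:02), 8443/tcp is published on 127.0.0.1:32796, and the ha-608611 network still holds 192.168.49.2, so the connection refusals above come from the apiserver process inside the guest rather than from Docker networking. Two quick probes to confirm, with the port and names taken from the inspect output (illustrative):

    # Host side: hit the published apiserver port directly.
    curl -sk https://127.0.0.1:32796/healthz; echo
    # Guest side: list kube-apiserver containers, including exited ones.
    docker exec ha-608611 crictl ps -a --name kube-apiserver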
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (306.342625ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                                               │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                                           │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │ 09 Oct 25 19:02 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:02:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:02:02.366634   86934 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:02:02.366900   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.366909   86934 out.go:374] Setting ErrFile to fd 2...
	I1009 19:02:02.366914   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.367183   86934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:02:02.367673   86934 out.go:368] Setting JSON to false
	I1009 19:02:02.368576   86934 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6270,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:02:02.368665   86934 start.go:141] virtualization: kvm guest
	I1009 19:02:02.370893   86934 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:02:02.372496   86934 notify.go:220] Checking for updates...
	I1009 19:02:02.372569   86934 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:02:02.374010   86934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:02:02.375862   86934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:02.377311   86934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 19:02:02.378757   86934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:02:02.380255   86934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:02:02.382046   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:02.382523   86934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:02:02.405566   86934 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:02:02.405698   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.460511   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.449781611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.460617   86934 docker.go:318] overlay module found
	I1009 19:02:02.467934   86934 out.go:179] * Using the docker driver based on existing profile
	I1009 19:02:02.472893   86934 start.go:305] selected driver: docker
	I1009 19:02:02.472930   86934 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.473021   86934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:02:02.473177   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.530403   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.520535313 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.530972   86934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:02:02.530995   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:02.531058   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:02.531099   86934 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.536297   86934 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 19:02:02.537921   86934 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:02:02.539315   86934 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:02:02.540530   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:02.540558   86934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:02:02.540566   86934 cache.go:64] Caching tarball of preloaded images
	I1009 19:02:02.540649   86934 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:02:02.540659   86934 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:02:02.540644   86934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:02:02.540747   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.560713   86934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:02:02.560736   86934 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:02:02.560755   86934 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:02:02.560776   86934 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:02:02.560826   86934 start.go:364] duration metric: took 34.956µs to acquireMachinesLock for "ha-608611"
	I1009 19:02:02.560843   86934 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:02:02.560848   86934 fix.go:54] fixHost starting: 
	I1009 19:02:02.561074   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.578279   86934 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 19:02:02.578318   86934 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:02:02.580033   86934 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 19:02:02.580095   86934 cli_runner.go:164] Run: docker start ha-608611
	I1009 19:02:02.818090   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.837398   86934 kic.go:430] container "ha-608611" state is running.
	I1009 19:02:02.837716   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:02.857081   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.857332   86934 machine.go:93] provisionDockerMachine start ...
	I1009 19:02:02.857395   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:02.875516   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:02.875763   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:02.875778   86934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:02:02.876346   86934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:32793: read: connection reset by peer
	I1009 19:02:06.023115   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.023157   86934 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 19:02:06.023213   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.041188   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.041419   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.041437   86934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 19:02:06.195947   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.196039   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.214427   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.214707   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.214726   86934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
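
The script above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, so repeated provisioning leaves a single hostname mapping. One way to confirm the result over the same forwarded SSH port (a sketch; the port, key path, and user are the ones logged for this run):

	ssh -o StrictHostKeyChecking=no -p 32793 \
	    -i /home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa \
	    docker@127.0.0.1 'hostname && grep 127.0.1.1 /etc/hosts'
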
	I1009 19:02:06.359913   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:02:06.359938   86934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 19:02:06.359976   86934 ubuntu.go:190] setting up certificates
	I1009 19:02:06.359987   86934 provision.go:84] configureAuth start
	I1009 19:02:06.360055   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:06.377565   86934 provision.go:143] copyHostCerts
	I1009 19:02:06.377598   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377621   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 19:02:06.377632   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377706   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 19:02:06.377792   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377809   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 19:02:06.377815   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377841   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 19:02:06.377885   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377901   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 19:02:06.377907   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377930   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 19:02:06.377978   86934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 19:02:06.551568   86934 provision.go:177] copyRemoteCerts
	I1009 19:02:06.551627   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:02:06.551664   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.569563   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:06.671559   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:02:06.671624   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:02:06.689362   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:02:06.689417   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:02:06.706820   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:02:06.706884   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:02:06.723659   86934 provision.go:87] duration metric: took 363.656182ms to configureAuth
	I1009 19:02:06.723684   86934 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:02:06.723837   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:06.723932   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.741523   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.741719   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.741733   86934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:02:06.997259   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:02:06.997278   86934 machine.go:96] duration metric: took 4.139930505s to provisionDockerMachine
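
The /etc/sysconfig/crio.minikube drop-in written a few lines above feeds extra flags to the CRI-O service: CRIO_MINIKUBE_OPTIONS marks the service CIDR (10.96.0.0/12) as an insecure registry range so in-cluster registries can be pulled from without TLS. A quick check that the restart picked it up (a sketch, run on the node):

	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio
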
	I1009 19:02:06.997295   86934 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 19:02:06.997303   86934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:02:06.997364   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:02:06.997436   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.015165   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.117424   86934 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:02:07.121129   86934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:02:07.121172   86934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:02:07.121187   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 19:02:07.121231   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 19:02:07.121302   86934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 19:02:07.121313   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 19:02:07.121398   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:02:07.128962   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:07.146444   86934 start.go:296] duration metric: took 149.135002ms for postStartSetup
	I1009 19:02:07.146528   86934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:02:07.146561   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.164604   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.263216   86934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:02:07.267755   86934 fix.go:56] duration metric: took 4.706900009s for fixHost
	I1009 19:02:07.267794   86934 start.go:83] releasing machines lock for "ha-608611", held for 4.706943222s
	I1009 19:02:07.267857   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:07.284443   86934 ssh_runner.go:195] Run: cat /version.json
	I1009 19:02:07.284488   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.284518   86934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:02:07.284564   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.302426   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.302797   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.452812   86934 ssh_runner.go:195] Run: systemctl --version
	I1009 19:02:07.459227   86934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:02:07.492322   86934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:02:07.496837   86934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:02:07.496893   86934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:02:07.504414   86934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:02:07.504435   86934 start.go:495] detecting cgroup driver to use...
	I1009 19:02:07.504461   86934 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:02:07.504497   86934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:02:07.518639   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:02:07.530028   86934 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:02:07.530080   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:02:07.543210   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:02:07.554574   86934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:02:07.631689   86934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:02:07.710043   86934 docker.go:234] disabling docker service ...
	I1009 19:02:07.710103   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:02:07.723929   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:02:07.736312   86934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:02:07.813951   86934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:02:07.891501   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:02:07.903630   86934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:02:07.917404   86934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:02:07.917468   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.926188   86934 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:02:07.926260   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.935124   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.943686   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.952342   86934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:02:07.960386   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.969265   86934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.977652   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.986892   86934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:02:07.994317   86934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:02:08.001853   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.079819   86934 ssh_runner.go:195] Run: sudo systemctl restart crio
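
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd with conmon placed in the pod cgroup, and injects a default sysctl that opens unprivileged ports. After the restart, the effective values can be confirmed with (a sketch):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
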
	I1009 19:02:08.184066   86934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:02:08.184131   86934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:02:08.188032   86934 start.go:563] Will wait 60s for crictl version
	I1009 19:02:08.188080   86934 ssh_runner.go:195] Run: which crictl
	I1009 19:02:08.191568   86934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:02:08.215064   86934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:02:08.215130   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.242668   86934 ssh_runner.go:195] Run: crio --version
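
The crictl version probe a few lines up talks to the CRI socket named in /etc/crictl.yaml, written just before it; the endpoint can also be passed explicitly, bypassing the config file (a sketch):

	sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
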
	I1009 19:02:08.272310   86934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:02:08.273867   86934 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:02:08.291028   86934 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:02:08.295020   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.304927   86934 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:02:08.305037   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:08.305076   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.334586   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.334605   86934 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:02:08.334646   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.359864   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.359884   86934 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:02:08.359891   86934 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:02:08.359982   86934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
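
This drop-in clears kubelet's distro ExecStart and replaces it with the flag set above (cgroups-per-qos off, node IP pinned, bootstrap kubeconfig wired in); the rendered unit and drop-in are scp'd to the node just below and can be inspected there (a sketch):

	sudo systemctl cat kubelet
	sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
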
	I1009 19:02:08.360041   86934 ssh_runner.go:195] Run: crio config
	I1009 19:02:08.403513   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:08.403536   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:08.403553   86934 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:02:08.403581   86934 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:02:08.403758   86934 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:02:08.403826   86934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:02:08.411830   86934 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:02:08.411894   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:02:08.419468   86934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:02:08.432379   86934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:02:08.445216   86934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
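
The staged kubeadm.yaml.new matches the multi-document config rendered above and can be sanity-checked before it replaces the live file; recent kubeadm releases ship a validator for this (a sketch, assuming the staged v1.34.1 binary):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
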
	I1009 19:02:08.457891   86934 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:02:08.461609   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.471627   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.548747   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:08.570439   86934 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 19:02:08.570462   86934 certs.go:195] generating shared ca certs ...
	I1009 19:02:08.570494   86934 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:08.570644   86934 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 19:02:08.570699   86934 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 19:02:08.570711   86934 certs.go:257] generating profile certs ...
	I1009 19:02:08.570809   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 19:02:08.570886   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 19:02:08.570937   86934 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 19:02:08.570950   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:02:08.570974   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:02:08.570990   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:02:08.571008   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:02:08.571026   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:02:08.571045   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:02:08.571062   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:02:08.571080   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:02:08.571169   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 19:02:08.571210   86934 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 19:02:08.571224   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:02:08.571259   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:02:08.571305   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:02:08.571336   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 19:02:08.571392   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:08.571429   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.571452   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.571470   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.572252   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:02:08.590519   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:02:08.608788   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:02:08.628771   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:02:08.652296   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:02:08.669442   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:02:08.686413   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:02:08.702970   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:02:08.719872   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 19:02:08.736350   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:02:08.753020   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 19:02:08.770756   86934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:02:08.782846   86934 ssh_runner.go:195] Run: openssl version
	I1009 19:02:08.788680   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:02:08.796773   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800287   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800342   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.834331   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:02:08.842576   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 19:02:08.850707   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854375   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854417   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.888132   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 19:02:08.896190   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 19:02:08.904560   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908107   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908167   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.941616   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
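
The .0 symlinks created above follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash, which is how the system trust store looks certificates up. The hash behind each link can be reproduced directly (a sketch, using the minikube CA as the example):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link above
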
	I1009 19:02:08.949683   86934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:02:08.953888   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:02:08.988843   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:02:09.022384   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:02:09.055785   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:02:09.100654   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:02:09.138816   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
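
Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which presumably gates minikube's decision to regenerate; the same check works standalone (a sketch):

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	    && echo 'valid for at least 24h' || echo 'expiring within 24h'
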
	I1009 19:02:09.175373   86934 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:09.175553   86934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:02:09.175626   86934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:02:09.203282   86934 cri.go:89] found id: ""
	I1009 19:02:09.203337   86934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:02:09.211170   86934 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:02:09.211189   86934 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:02:09.211233   86934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:02:09.218525   86934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:02:09.218879   86934 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.218998   86934 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 19:02:09.219307   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.219795   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.220220   86934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:02:09.220236   86934 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:02:09.220244   86934 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:02:09.220251   86934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:02:09.220258   86934 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:02:09.220304   86934 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:02:09.220587   86934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:02:09.228184   86934 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:02:09.228212   86934 kubeadm.go:601] duration metric: took 17.018594ms to restartPrimaryControlPlane
	I1009 19:02:09.228221   86934 kubeadm.go:402] duration metric: took 52.859442ms to StartCluster
	I1009 19:02:09.228235   86934 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228289   86934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.228747   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228944   86934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:09.229006   86934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
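In the toEnable map above only storage-provisioner and default-storageclass are true; everything else was left off for this HA profile. Reducing such a map to the enabled names is a plain filter, sketched below (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "sort"
    )

    // enabledAddons filters the toEnable map down to the names flagged true,
    // sorted for stable output; here only the two defaults survive.
    func enabledAddons(toEnable map[string]bool) []string {
        var out []string
        for name, on := range toEnable {
            if on {
                out = append(out, name)
            }
        }
        sort.Strings(out)
        return out
    }

    func main() {
        fmt.Println(enabledAddons(map[string]bool{
            "storage-provisioner": true, "default-storageclass": true, "ingress": false,
        })) // [default-storageclass storage-provisioner]
    }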
	I1009 19:02:09.229112   86934 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 19:02:09.229129   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:09.229158   86934 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 19:02:09.229194   86934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 19:02:09.229132   86934 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 19:02:09.229294   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.229535   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.229746   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.232398   86934 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:09.234182   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:09.249828   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.250212   86934 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 19:02:09.250254   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.250729   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.253666   86934 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:09.255198   86934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.255220   86934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:09.255295   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.279913   86934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:09.279935   86934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:09.279997   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.280244   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.298795   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
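The "scp memory" lines above write the in-memory addon manifests straight to the node over the SSH endpoint 127.0.0.1:32793, with no temporary local file. One way to implement that pattern with golang.org/x/crypto/ssh is to pipe the bytes through sudo tee; a sketch under those assumptions (the manifest placeholder and helper are illustrative, not minikube's sshutil):

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams data into path on the remote host by piping it
    // through `sudo tee`, one way to realize an "scp from memory".
    func writeRemote(client *ssh.Client, path string, data []byte) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(data)
        return session.Run("sudo tee " + path + " >/dev/null")
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        manifest := []byte("# placeholder for the 2676-byte storage-provisioner.yaml\n")
        if err := writeRemote(client, "/etc/kubernetes/addons/storage-provisioner.yaml", manifest); err != nil {
            log.Fatal(err)
        }
    }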
	I1009 19:02:09.340817   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:09.353728   86934 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
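node_ready.go now polls the node's Ready condition for up to six minutes, and the W-level lines that follow show each failed poll being swallowed and retried. A comparable loop with client-go's wait helpers (the 2s interval is an assumption; minikube's actual cadence differs):

    package nodeready

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady blocks until the node reports Ready=True or the timeout
    // expires; transient API errors such as "connection refused" during an
    // apiserver restart are treated as retryable, as in node_ready.go.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // retry, mirroring the W-level log lines
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }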
	I1009 19:02:09.392883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.410568   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.451098   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.451133   86934 retry.go:31] will retry after 367.251438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:09.467582   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.467614   86934 retry.go:31] will retry after 202.583149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
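From here the log settles into a long retry loop: every apply fails the same way, and retry.go reschedules it with a randomized delay that grows from a few hundred milliseconds to roughly 48s near the end. A minimal sketch of that jittered-backoff pattern (the exact policy in retry.go differs):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryApply re-runs apply with a jittered delay that roughly doubles per
    // attempt, which is why the logged waits climb from ~200ms to tens of seconds.
    func retryApply(apply func() error, attempts int) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return err
    }

    func main() {
        _ = retryApply(func() error {
            return errors.New("dial tcp [::1]:8443: connect: connection refused")
        }, 3)
    }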
	I1009 19:02:09.671071   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.728118   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.728165   86934 retry.go:31] will retry after 532.603205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.819359   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:09.870710   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.870743   86934 retry.go:31] will retry after 279.776339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.151303   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.203393   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.203423   86934 retry.go:31] will retry after 347.914412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.261624   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:10.312099   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.312161   86934 retry.go:31] will retry after 754.410355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.551883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.604202   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.604236   86934 retry.go:31] will retry after 610.586718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.067261   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.118580   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.118609   86934 retry.go:31] will retry after 814.916965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.215892   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:11.267928   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.267972   86934 retry.go:31] will retry after 1.45438082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:11.354562   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:11.934655   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.986484   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.986513   86934 retry.go:31] will retry after 1.124124769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.723181   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:12.774656   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.774689   86934 retry.go:31] will retry after 1.232500279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.111665   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:13.165517   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.165552   86934 retry.go:31] will retry after 2.16641371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:13.355245   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:14.007705   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:14.059964   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:14.059992   86934 retry.go:31] will retry after 3.058954256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.332271   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:15.386449   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.386473   86934 retry.go:31] will retry after 3.386344457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:15.854462   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:17.120044   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:17.172191   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:17.172228   86934 retry.go:31] will retry after 5.108857909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:17.855169   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:18.773686   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:18.825043   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:18.825075   86934 retry.go:31] will retry after 4.328736912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:20.354784   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:22.282235   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:22.336593   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:22.336620   86934 retry.go:31] will retry after 8.469274029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:22.355192   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:23.154808   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:23.207154   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.207192   86934 retry.go:31] will retry after 9.59352501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:24.854514   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:27.355255   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:29.854449   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:30.806123   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:30.858604   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.858637   86934 retry.go:31] will retry after 13.297733582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:32.354331   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:32.800848   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:32.854427   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:32.854451   86934 retry.go:31] will retry after 8.328873063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:34.354417   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:36.354493   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:38.354571   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:40.354643   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:41.184043   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:41.237661   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:41.237694   86934 retry.go:31] will retry after 10.702907746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:42.854628   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:44.156959   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:44.208755   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:44.208790   86934 retry.go:31] will retry after 18.065677643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:45.354394   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:47.854450   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:49.854575   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:51.941580   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:51.995763   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:51.995796   86934 retry.go:31] will retry after 22.859549113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:52.354574   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:54.854455   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:57.354280   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:59.354606   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:01.854286   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:02.274776   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:02.329455   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:02.329481   86934 retry.go:31] will retry after 18.531804756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:03.854398   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:05.855306   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:08.354544   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:10.354642   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:12.854650   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:14.855487   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:14.910832   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:14.910866   86934 retry.go:31] will retry after 23.992226966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:15.354856   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:17.854777   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:19.855067   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:20.862242   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:20.916094   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:20.916120   86934 retry.go:31] will retry after 48.100773528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:22.355103   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:24.355298   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:26.855213   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:29.354698   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:31.854367   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:33.854849   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:36.354516   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:38.354590   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:38.903767   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:38.956838   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:38.956956   86934 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
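Note that every attempt above dies before any object reaches the cluster: kubectl cannot even download the OpenAPI schema because nothing is listening on localhost:8443, so --force changes nothing. Gating the applies on the apiserver itself would make the retries meaningful; a sketch using the discovery client's raw /readyz probe (a check this log does not show minikube performing):

    package readiness

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverReady hits /readyz directly; it returns nil only once
    // kube-apiserver is actually serving, at which point the applies above
    // would stop failing with "connection refused".
    func apiserverReady(ctx context.Context, cs *kubernetes.Clientset) error {
        _, err := cs.Discovery().RESTClient().Get().AbsPath("/readyz").DoRaw(ctx)
        return err
    }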
	W1009 19:03:40.854352   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:42.854763   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:44.855321   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:47.354581   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:49.355061   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:51.854592   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:53.855020   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:56.354334   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:58.354436   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:00.355133   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:02.355211   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:04.854653   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:06.854735   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:04:09.017880   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:04:09.070622   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:09.070759   86934 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:04:09.073837   86934 out.go:179] * Enabled addons: 
	I1009 19:04:09.075208   86934 addons.go:514] duration metric: took 1m59.846203175s for enable addons: enabled=[]
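	Note: the validation failure above is a symptom, not a cause: kubectl's client-side validation has to download the OpenAPI schema from the apiserver, so any apply fails once localhost:8443 refuses connections. A minimal Go sketch (not minikube's actual code; the URL is taken from this log's 192.168.49.2:8443 endpoint) of the kind of readiness probe that would gate the apply:
	
	    // readyz_probe.go - illustrative pre-flight check: wait for the
	    // apiserver's /readyz endpoint before applying addon manifests.
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    func main() {
	        // The apiserver serves a self-signed cert here, so skip
	        // verification for this diagnostic probe only.
	        client := &http.Client{
	            Timeout:   2 * time.Second,
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get("https://192.168.49.2:8443/readyz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver ready; safe to apply addon manifests")
	                    return
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("apiserver never became ready; applying manifests would fail as above")
	    }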
	W1009 19:04:09.354738   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 105 identical node_ready retry lines (19:04:11 through 19:08:05, roughly every 2-2.5s) elided ...]
	W1009 19:08:08.354483   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:08:09.353866   86934 node_ready.go:38] duration metric: took 6m0.000084484s for node "ha-608611" to be "Ready" ...
	I1009 19:08:09.356453   86934 out.go:203] 
	W1009 19:08:09.357971   86934 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:08:09.357991   86934 out.go:285] * 
	W1009 19:08:09.359976   86934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:08:09.361285   86934 out.go:203] 
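	Note: the long run of node_ready.go:55 lines above is a fixed-deadline poll of the Node's "Ready" condition; it never transitions because the apiserver itself never comes up, so the 6m0s wait expires. A minimal sketch of that loop, under assumptions (illustrative only, not minikube's implementation):
	
	    // nodeready_sketch.go - poll the Node object's Ready condition
	    // until a deadline, retrying on connection errors as seen above.
	    package main
	
	    import (
	        "context"
	        "fmt"
	        "time"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in this log
	        for time.Now().Before(deadline) {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-608611", metav1.GetOptions{})
	            if err != nil {
	                // This is the "will retry" branch seen repeatedly above.
	                fmt.Printf("error getting node (will retry): %v\n", err)
	            } else {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        fmt.Println("node is Ready")
	                        return
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for node Ready")
	    }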
	
	
	==> CRI-O <==
	Oct 09 19:07:57 ha-608611 crio[522]: time="2025-10-09T19:07:57.679788999Z" level=info msg="createCtr: removing container e9be094c33ac4df76c3696cbae7eed0661da684bbb4f26c6e713b9484403f776" id=e41e1cc3-c0bb-4a60-b7f4-06e7e246ba5e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:07:57 ha-608611 crio[522]: time="2025-10-09T19:07:57.67982287Z" level=info msg="createCtr: deleting container e9be094c33ac4df76c3696cbae7eed0661da684bbb4f26c6e713b9484403f776 from storage" id=e41e1cc3-c0bb-4a60-b7f4-06e7e246ba5e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:07:57 ha-608611 crio[522]: time="2025-10-09T19:07:57.68211942Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=e41e1cc3-c0bb-4a60-b7f4-06e7e246ba5e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.654264019Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=48e5b8e9-affe-437d-8bd6-04ad9994487b name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.655269846Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=63df026f-9f32-495a-b29e-cc20587cec3c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.656123779Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.656478548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.660994495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.661446519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.67424761Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.675729295Z" level=info msg="createCtr: deleting container ID 21d147f8f514c771877205dedced65ecb7a1e22d55e73939ea6867699ea9d073 from idIndex" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.675771859Z" level=info msg="createCtr: removing container 21d147f8f514c771877205dedced65ecb7a1e22d55e73939ea6867699ea9d073" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.675803376Z" level=info msg="createCtr: deleting container 21d147f8f514c771877205dedced65ecb7a1e22d55e73939ea6867699ea9d073 from storage" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.677910763Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.65453339Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=23ecfe77-5fcd-4b2b-aeab-194061e38eca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.656228172Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=283b6b3d-3ffc-4e51-8dd8-a391b83e32a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.657379299Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.657607876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.660987812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.661410632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.679585429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.680952036Z" level=info msg="createCtr: deleting container ID 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2 from idIndex" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.680992477Z" level=info msg="createCtr: removing container 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.681024255Z" level=info msg="createCtr: deleting container 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2 from storage" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.682961411Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
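	Note: every CreateContainer attempt above dies with "cannot open sd-bus: No such file or directory": the OCI runtime is configured for the systemd cgroup manager and tries to create scopes over D-Bus, but no systemd bus is reachable in this environment, so kube-scheduler, kube-controller-manager, and etcd can never start. One commonly used mitigation (an assumption here, not a fix verified by this report) is a CRI-O drop-in switching to the cgroupfs manager, with the kubelet's cgroup driver changed to match:
	
	    # /etc/crio/crio.conf.d/10-cgroupfs.conf - hypothetical drop-in;
	    # the systemd manager needs a reachable sd-bus, cgroupfs does not.
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"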
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:08:10.287757    2005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:10.288261    2005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:10.289770    2005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:10.290178    2005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:10.291672    2005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
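	Note: the memcache.go errors are kubectl's API discovery failing before the describe verb even runs; every kubectl invocation against localhost:8443 emits the same stanza while the apiserver is down. A direct health probe (an illustrative diagnostic, not one the harness ran) isolates the failure to the apiserver rather than kubectl:
	
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz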
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:08:10 up  1:50,  0 user,  load average: 0.00, 0.14, 0.15
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:07:57 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:07:57 ha-608611 kubelet[671]: E1009 19:07:57.682630     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 19:07:58 ha-608611 kubelet[671]: E1009 19:07:58.673120     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 19:08:01 ha-608611 kubelet[671]: E1009 19:08:01.653781     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:01 ha-608611 kubelet[671]: E1009 19:08:01.678240     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:01 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:01 ha-608611 kubelet[671]:  > podSandboxID="e6d213121dff8e12e33b6dfffb3c6dee8f92a52bbf3378d51bab179d2c3d906d"
	Oct 09 19:08:01 ha-608611 kubelet[671]: E1009 19:08:01.678340     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:01 ha-608611 kubelet[671]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:01 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:01 ha-608611 kubelet[671]: E1009 19:08:01.678373     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	Oct 09 19:08:04 ha-608611 kubelet[671]: E1009 19:08:04.291382     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:08:04 ha-608611 kubelet[671]: I1009 19:08:04.461336     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 19:08:04 ha-608611 kubelet[671]: E1009 19:08:04.461782     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 19:08:06 ha-608611 kubelet[671]: E1009 19:08:06.648845     671 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-608611&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.517595     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce7e5d1ac7bf7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 19:02:08.646290423 +0000 UTC m=+0.074054203,LastTimestamp:2025-10-09 19:02:08.646290423 +0000 UTC m=+0.074054203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.653371     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683320     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:07 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:07 ha-608611 kubelet[671]:  > podSandboxID="d7b0b4143624f2e40fa8420bc4baa97f53997144043ca1197badd7726113b7b9"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683416     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:07 ha-608611 kubelet[671]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:07 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683446     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 19:08:08 ha-608611 kubelet[671]: E1009 19:08:08.674312     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (297.362221ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
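Note: the --format={{.APIServer}} and --format={{.Host}} flags used here are Go text/template expressions evaluated against minikube's status struct, which is why the command prints a single bare word per query and why Host can report "Running" while APIServer reports "Stopped". A self-contained sketch of the mechanism (the struct fields are assumed to mirror the flag names, not copied from minikube's source):

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields the harness queries; the struct itself
    // is an assumption for illustration.
    type Status struct {
        Host      string
        APIServer string
    }

    func main() {
        s := Status{Host: "Running", APIServer: "Stopped"} // values seen in this report
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
        if err := tmpl.Execute(os.Stdout, s); err != nil { // prints: Stopped
            panic(err)
        }
    }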
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-608611" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 87136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:02:02.606525681Z",
	            "FinishedAt": "2025-10-09T19:02:01.288438646Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34e364d29b995e9f397e4ff58ac14a48a876f810f7b517d883d6edcdbb1bf188",
	            "SandboxKey": "/var/run/docker/netns/34e364d29b99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:4f:68:d2:b9:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "9607201385fb50d883c3f937998cbc9542b588f50f9c40d6bdf9c41bc6baf758",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
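The NetworkSettings.Ports map in the inspect dump above is the same data minikube reads back whenever it needs the host-side SSH endpoint of the container (see the repeated docker container inspect -f ...HostPort... runs later in this log). A minimal stand-alone version of that lookup, reusing the container name and port mapping shown above:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611
	# prints 32793, the host port bound to the container's SSH port in the dump above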
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (299.190627ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                                               │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                                           │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │ 09 Oct 25 19:02 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:02:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:02:02.366634   86934 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:02:02.366900   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.366909   86934 out.go:374] Setting ErrFile to fd 2...
	I1009 19:02:02.366914   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.367183   86934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:02:02.367673   86934 out.go:368] Setting JSON to false
	I1009 19:02:02.368576   86934 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6270,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:02:02.368665   86934 start.go:141] virtualization: kvm guest
	I1009 19:02:02.370893   86934 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:02:02.372496   86934 notify.go:220] Checking for updates...
	I1009 19:02:02.372569   86934 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:02:02.374010   86934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:02:02.375862   86934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:02.377311   86934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 19:02:02.378757   86934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:02:02.380255   86934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:02:02.382046   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:02.382523   86934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:02:02.405566   86934 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:02:02.405698   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.460511   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.449781611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.460617   86934 docker.go:318] overlay module found
	I1009 19:02:02.467934   86934 out.go:179] * Using the docker driver based on existing profile
	I1009 19:02:02.472893   86934 start.go:305] selected driver: docker
	I1009 19:02:02.472930   86934 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.473021   86934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:02:02.473177   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.530403   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.520535313 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.530972   86934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:02:02.530995   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:02.531058   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:02.531099   86934 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.536297   86934 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 19:02:02.537921   86934 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:02:02.539315   86934 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:02:02.540530   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:02.540558   86934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:02:02.540566   86934 cache.go:64] Caching tarball of preloaded images
	I1009 19:02:02.540649   86934 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:02:02.540659   86934 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:02:02.540644   86934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:02:02.540747   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.560713   86934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:02:02.560736   86934 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:02:02.560755   86934 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:02:02.560776   86934 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:02:02.560826   86934 start.go:364] duration metric: took 34.956µs to acquireMachinesLock for "ha-608611"
	I1009 19:02:02.560843   86934 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:02:02.560848   86934 fix.go:54] fixHost starting: 
	I1009 19:02:02.561074   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.578279   86934 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 19:02:02.578318   86934 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:02:02.580033   86934 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 19:02:02.580095   86934 cli_runner.go:164] Run: docker start ha-608611
	I1009 19:02:02.818090   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.837398   86934 kic.go:430] container "ha-608611" state is running.
	I1009 19:02:02.837716   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:02.857081   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.857332   86934 machine.go:93] provisionDockerMachine start ...
	I1009 19:02:02.857395   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:02.875516   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:02.875763   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:02.875778   86934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:02:02.876346   86934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:32793: read: connection reset by peer
	I1009 19:02:06.023115   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.023157   86934 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 19:02:06.023213   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.041188   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.041419   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.041437   86934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 19:02:06.195947   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.196039   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.214427   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.214707   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.214726   86934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:02:06.359913   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:02:06.359938   86934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 19:02:06.359976   86934 ubuntu.go:190] setting up certificates
	I1009 19:02:06.359987   86934 provision.go:84] configureAuth start
	I1009 19:02:06.360055   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:06.377565   86934 provision.go:143] copyHostCerts
	I1009 19:02:06.377598   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377621   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 19:02:06.377632   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377706   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 19:02:06.377792   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377809   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 19:02:06.377815   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377841   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 19:02:06.377885   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377901   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 19:02:06.377907   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377930   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 19:02:06.377978   86934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 19:02:06.551568   86934 provision.go:177] copyRemoteCerts
	I1009 19:02:06.551627   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:02:06.551664   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.569563   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:06.671559   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:02:06.671624   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:02:06.689362   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:02:06.689417   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:02:06.706820   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:02:06.706884   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:02:06.723659   86934 provision.go:87] duration metric: took 363.656182ms to configureAuth
	I1009 19:02:06.723684   86934 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:02:06.723837   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:06.723932   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.741523   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.741719   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.741733   86934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:02:06.997259   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:02:06.997278   86934 machine.go:96] duration metric: took 4.139930505s to provisionDockerMachine
	I1009 19:02:06.997295   86934 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 19:02:06.997303   86934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:02:06.997364   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:02:06.997436   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.015165   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.117424   86934 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:02:07.121129   86934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:02:07.121172   86934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:02:07.121187   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 19:02:07.121231   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 19:02:07.121302   86934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 19:02:07.121313   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 19:02:07.121398   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:02:07.128962   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:07.146444   86934 start.go:296] duration metric: took 149.135002ms for postStartSetup
	I1009 19:02:07.146528   86934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:02:07.146561   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.164604   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.263216   86934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:02:07.267755   86934 fix.go:56] duration metric: took 4.706900009s for fixHost
	I1009 19:02:07.267794   86934 start.go:83] releasing machines lock for "ha-608611", held for 4.706943222s
	I1009 19:02:07.267857   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:07.284443   86934 ssh_runner.go:195] Run: cat /version.json
	I1009 19:02:07.284488   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.284518   86934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:02:07.284564   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.302426   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.302797   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.452812   86934 ssh_runner.go:195] Run: systemctl --version
	I1009 19:02:07.459227   86934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:02:07.492322   86934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:02:07.496837   86934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:02:07.496893   86934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:02:07.504414   86934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:02:07.504435   86934 start.go:495] detecting cgroup driver to use...
	I1009 19:02:07.504461   86934 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:02:07.504497   86934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:02:07.518639   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:02:07.530028   86934 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:02:07.530080   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:02:07.543210   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:02:07.554574   86934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:02:07.631689   86934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:02:07.710043   86934 docker.go:234] disabling docker service ...
	I1009 19:02:07.710103   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:02:07.723929   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:02:07.736312   86934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:02:07.813951   86934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:02:07.891501   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:02:07.903630   86934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:02:07.917404   86934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:02:07.917468   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.926188   86934 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:02:07.926260   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.935124   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.943686   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.952342   86934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:02:07.960386   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.969265   86934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.977652   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.986892   86934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:02:07.994317   86934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:02:08.001853   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.079819   86934 ssh_runner.go:195] Run: sudo systemctl restart crio
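	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands in this log, not a capture of the file itself; exact placement within the file may differ):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The daemon-reload/restart pair then makes CRI-O pick the changes up before the 60s socket wait that follows.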
	I1009 19:02:08.184066   86934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:02:08.184131   86934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:02:08.188032   86934 start.go:563] Will wait 60s for crictl version
	I1009 19:02:08.188080   86934 ssh_runner.go:195] Run: which crictl
	I1009 19:02:08.191568   86934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:02:08.215064   86934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:02:08.215130   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.242668   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.272310   86934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:02:08.273867   86934 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:02:08.291028   86934 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:02:08.295020   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.304927   86934 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:02:08.305037   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:08.305076   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.334586   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.334605   86934 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:02:08.334646   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.359864   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.359884   86934 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:02:08.359891   86934 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:02:08.359982   86934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
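	The ExecStart override above is installed as a systemd drop-in (it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). To inspect the merged unit on the node, one option is:
	
		sudo systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in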
	I1009 19:02:08.360041   86934 ssh_runner.go:195] Run: crio config
	I1009 19:02:08.403513   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:08.403536   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:08.403553   86934 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:02:08.403581   86934 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:02:08.403758   86934 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
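	The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If you need to sanity-check a config like this by hand, recent kubeadm releases ship a linter; a sketch, assuming the v1.34.1 binary path used elsewhere in this log:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new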
	
	I1009 19:02:08.403826   86934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:02:08.411830   86934 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:02:08.411894   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:02:08.419468   86934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:02:08.432379   86934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:02:08.445216   86934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:02:08.457891   86934 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:02:08.461609   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.471627   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.548747   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:08.570439   86934 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 19:02:08.570462   86934 certs.go:195] generating shared ca certs ...
	I1009 19:02:08.570494   86934 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:08.570644   86934 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 19:02:08.570699   86934 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 19:02:08.570711   86934 certs.go:257] generating profile certs ...
	I1009 19:02:08.570809   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 19:02:08.570886   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 19:02:08.570937   86934 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 19:02:08.570950   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:02:08.570974   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:02:08.570990   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:02:08.571008   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:02:08.571026   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:02:08.571045   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:02:08.571062   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:02:08.571080   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:02:08.571169   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 19:02:08.571210   86934 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 19:02:08.571224   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:02:08.571259   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:02:08.571305   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:02:08.571336   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 19:02:08.571392   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:08.571429   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.571452   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.571470   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.572252   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:02:08.590519   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:02:08.608788   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:02:08.628771   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:02:08.652296   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:02:08.669442   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:02:08.686413   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:02:08.702970   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:02:08.719872   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 19:02:08.736350   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:02:08.753020   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 19:02:08.770756   86934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:02:08.782846   86934 ssh_runner.go:195] Run: openssl version
	I1009 19:02:08.788680   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:02:08.796773   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800287   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800342   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.834331   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:02:08.842576   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 19:02:08.850707   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854375   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854417   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.888132   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 19:02:08.896190   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 19:02:08.904560   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908107   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908167   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.941616   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:02:08.949683   86934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:02:08.953888   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:02:08.988843   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:02:09.022384   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:02:09.055785   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:02:09.100654   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:02:09.138816   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:02:09.175373   86934 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:09.175553   86934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:02:09.175626   86934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:02:09.203282   86934 cri.go:89] found id: ""
	I1009 19:02:09.203337   86934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:02:09.211170   86934 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:02:09.211189   86934 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:02:09.211233   86934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:02:09.218525   86934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:02:09.218879   86934 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.218998   86934 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 19:02:09.219307   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.219795   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.220220   86934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:02:09.220236   86934 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:02:09.220244   86934 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:02:09.220251   86934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:02:09.220258   86934 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:02:09.220304   86934 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:02:09.220587   86934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:02:09.228184   86934 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:02:09.228212   86934 kubeadm.go:601] duration metric: took 17.018594ms to restartPrimaryControlPlane
	I1009 19:02:09.228221   86934 kubeadm.go:402] duration metric: took 52.859442ms to StartCluster
	I1009 19:02:09.228235   86934 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228289   86934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.228747   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228944   86934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:09.229006   86934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:02:09.229112   86934 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 19:02:09.229129   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:09.229158   86934 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 19:02:09.229194   86934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 19:02:09.229132   86934 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 19:02:09.229294   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.229535   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.229746   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.232398   86934 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:09.234182   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:09.249828   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.250212   86934 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 19:02:09.250254   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.250729   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.253666   86934 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:09.255198   86934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.255220   86934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:09.255295   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.279913   86934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:09.279935   86934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:09.279997   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.280244   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.298795   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.340817   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:09.353728   86934 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 19:02:09.392883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.410568   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.451098   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.451133   86934 retry.go:31] will retry after 367.251438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:09.467582   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.467614   86934 retry.go:31] will retry after 202.583149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.671071   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.728118   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.728165   86934 retry.go:31] will retry after 532.603205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.819359   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:09.870710   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.870743   86934 retry.go:31] will retry after 279.776339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.151303   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.203393   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.203423   86934 retry.go:31] will retry after 347.914412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.261624   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:10.312099   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.312161   86934 retry.go:31] will retry after 754.410355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.551883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.604202   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.604236   86934 retry.go:31] will retry after 610.586718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.067261   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.118580   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.118609   86934 retry.go:31] will retry after 814.916965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.215892   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:11.267928   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.267972   86934 retry.go:31] will retry after 1.45438082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:11.354562   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:11.934655   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.986484   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.986513   86934 retry.go:31] will retry after 1.124124769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.723181   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:12.774656   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.774689   86934 retry.go:31] will retry after 1.232500279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.111665   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:13.165517   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.165552   86934 retry.go:31] will retry after 2.16641371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:13.355245   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:14.007705   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:14.059964   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:14.059992   86934 retry.go:31] will retry after 3.058954256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.332271   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:15.386449   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.386473   86934 retry.go:31] will retry after 3.386344457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:15.854462   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:17.120044   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:17.172191   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:17.172228   86934 retry.go:31] will retry after 5.108857909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:17.855169   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:18.773686   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:18.825043   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:18.825075   86934 retry.go:31] will retry after 4.328736912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:20.354784   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:22.282235   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:22.336593   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:22.336620   86934 retry.go:31] will retry after 8.469274029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:22.355192   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:23.154808   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:23.207154   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.207192   86934 retry.go:31] will retry after 9.59352501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:24.854514   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:27.355255   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:29.854449   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:30.806123   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:30.858604   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.858637   86934 retry.go:31] will retry after 13.297733582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:32.354331   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:32.800848   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:32.854427   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:32.854451   86934 retry.go:31] will retry after 8.328873063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:34.354417   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:36.354493   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:38.354571   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:40.354643   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:41.184043   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:41.237661   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:41.237694   86934 retry.go:31] will retry after 10.702907746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:42.854628   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:44.156959   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:44.208755   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:44.208790   86934 retry.go:31] will retry after 18.065677643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:45.354394   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:47.854450   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:49.854575   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:51.941580   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:51.995763   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:51.995796   86934 retry.go:31] will retry after 22.859549113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:52.354574   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:54.854455   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:57.354280   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:59.354606   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:01.854286   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:02.274776   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:02.329455   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:02.329481   86934 retry.go:31] will retry after 18.531804756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:03.854398   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:05.855306   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:08.354544   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:10.354642   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:12.854650   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:14.855487   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:14.910832   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:14.910866   86934 retry.go:31] will retry after 23.992226966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:15.354856   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:17.854777   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:19.855067   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:20.862242   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:20.916094   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:20.916120   86934 retry.go:31] will retry after 48.100773528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:22.355103   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:24.355298   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:26.855213   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:29.354698   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:31.854367   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:33.854849   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:36.354516   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:38.354590   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:38.903767   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:38.956838   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:38.956956   86934 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:03:40.854352   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:42.854763   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:44.855321   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:47.354581   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:49.355061   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:51.854592   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:53.855020   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:56.354334   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:58.354436   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:00.355133   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:02.355211   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:04.854653   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:06.854735   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:04:09.017880   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:04:09.070622   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:09.070759   86934 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:04:09.073837   86934 out.go:179] * Enabled addons: 
	I1009 19:04:09.075208   86934 addons.go:514] duration metric: took 1m59.846203175s for enable addons: enabled=[]
	W1009 19:04:09.354738   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:11.854382   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:13.854761   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:15.855263   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:18.354436   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:20.354680   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:22.854757   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:25.354358   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:27.354618   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:29.355201   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:31.854584   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:33.855216   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:36.354515   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:38.355047   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:40.854574   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:42.854919   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:45.354432   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:47.354739   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:49.854455   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:51.854700   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:54.354542   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:56.354729   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:04:58.355340   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:00.854996   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:03.354655   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:05.354894   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:07.854625   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:09.854988   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:12.354612   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:14.355191   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:16.854672   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:18.855119   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:21.354471   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:23.355067   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:25.854706   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:28.354363   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:30.354952   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:32.854719   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:34.855304   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:37.354583   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:39.355134   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:41.854603   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:44.354384   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:46.354675   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:48.355094   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:50.854601   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:52.854769   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:55.354452   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:57.354754   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:05:59.854434   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:01.854660   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:03.855216   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:06.354552   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:08.354978   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:10.854742   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:13.354448   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:15.854379   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:17.854464   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:19.854680   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:22.354465   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:24.354554   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:26.854391   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:28.854550   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:30.854630   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:33.354581   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:35.354615   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:37.854978   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:39.855076   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:41.855108   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:43.855311   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:46.355236   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:48.355325   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:50.854629   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:52.854776   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:55.354563   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:57.854716   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:06:59.854942   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:02.354877   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:04.355253   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:06.854673   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:08.855261   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:11.354618   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:13.355044   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:15.854451   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:17.854909   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:20.354494   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:22.354776   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:24.854505   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:26.854756   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:29.354667   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:31.355071   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:33.854718   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:35.855122   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:38.354669   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:40.355263   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:42.854610   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:44.855295   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:47.354752   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:49.854638   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:51.855251   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:54.354792   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:56.854535   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:07:58.855239   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:08:01.354815   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:08:03.854572   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:08:05.854724   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:08:08.354483   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:08:09.353866   86934 node_ready.go:38] duration metric: took 6m0.000084484s for node "ha-608611" to be "Ready" ...
	I1009 19:08:09.356453   86934 out.go:203] 
	W1009 19:08:09.357971   86934 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:08:09.357991   86934 out.go:285] * 
	W1009 19:08:09.359976   86934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:08:09.361285   86934 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.675771859Z" level=info msg="createCtr: removing container 21d147f8f514c771877205dedced65ecb7a1e22d55e73939ea6867699ea9d073" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.675803376Z" level=info msg="createCtr: deleting container 21d147f8f514c771877205dedced65ecb7a1e22d55e73939ea6867699ea9d073 from storage" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:01 ha-608611 crio[522]: time="2025-10-09T19:08:01.677910763Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=a486d5b6-77b4-47f1-b9e1-e9709eb61f2b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.65453339Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=23ecfe77-5fcd-4b2b-aeab-194061e38eca name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.656228172Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=283b6b3d-3ffc-4e51-8dd8-a391b83e32a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.657379299Z" level=info msg="Creating container: kube-system/etcd-ha-608611/etcd" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.657607876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.660987812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.661410632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.679585429Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.680952036Z" level=info msg="createCtr: deleting container ID 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2 from idIndex" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.680992477Z" level=info msg="createCtr: removing container 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.681024255Z" level=info msg="createCtr: deleting container 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2 from storage" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.682961411Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.654554751Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a87463c3-7899-4e1c-b57b-2bcb42251a04 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.65556571Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=71b1124c-3b0e-4e79-bd73-ba5c31d46517 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.656551778Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.656799849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.660761221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.661271798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.675878882Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.67740191Z" level=info msg="createCtr: deleting container ID 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394 from idIndex" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.677449723Z" level=info msg="createCtr: removing container 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.677490223Z" level=info msg="createCtr: deleting container 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394 from storage" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.679643261Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:08:11.868702    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:11.869332    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:11.870853    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:11.871363    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:11.872866    2184 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:08:11 up  1:50,  0 user,  load average: 0.00, 0.14, 0.15
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:08:04 ha-608611 kubelet[671]: E1009 19:08:04.291382     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:08:04 ha-608611 kubelet[671]: I1009 19:08:04.461336     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 19:08:04 ha-608611 kubelet[671]: E1009 19:08:04.461782     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 19:08:06 ha-608611 kubelet[671]: E1009 19:08:06.648845     671 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-608611&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.517595     671 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-608611.186ce7e5d1ac7bf7  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-608611,UID:ha-608611,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-608611 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-608611,},FirstTimestamp:2025-10-09 19:02:08.646290423 +0000 UTC m=+0.074054203,LastTimestamp:2025-10-09 19:02:08.646290423 +0000 UTC m=+0.074054203,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-608611,}"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.653371     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683320     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:07 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:07 ha-608611 kubelet[671]:  > podSandboxID="d7b0b4143624f2e40fa8420bc4baa97f53997144043ca1197badd7726113b7b9"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683416     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:07 ha-608611 kubelet[671]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:07 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683446     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 19:08:08 ha-608611 kubelet[671]: E1009 19:08:08.674312     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.292738     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:08:11 ha-608611 kubelet[671]: I1009 19:08:11.463091     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.463487     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.654012     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.679971     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:11 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:11 ha-608611 kubelet[671]:  > podSandboxID="ac516349e9b506d388b69dc76eeb2ab388dd4861bbc6be0177da37d2c5d29a10"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.680115     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:11 ha-608611 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:11 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.680178     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (300.704421ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.58s)

TestMultiControlPlane/serial/AddSecondaryNode (1.53s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-608611 node add --control-plane --alsologtostderr -v 5: exit status 103 (253.449367ms)

-- stdout --
	* The control-plane node ha-608611 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-608611"

-- /stdout --
** stderr ** 
	I1009 19:08:12.303409   91603 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:08:12.303678   91603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:12.303688   91603 out.go:374] Setting ErrFile to fd 2...
	I1009 19:08:12.303693   91603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:12.303905   91603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:08:12.304202   91603 mustload.go:65] Loading cluster: ha-608611
	I1009 19:08:12.304541   91603 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:08:12.304916   91603 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:08:12.323261   91603 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:08:12.323523   91603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:08:12.385336   91603 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 19:08:12.374008524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:08:12.385481   91603 api_server.go:166] Checking apiserver status ...
	I1009 19:08:12.385523   91603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:08:12.385554   91603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:08:12.404190   91603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	W1009 19:08:12.508044   91603 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:08:12.509776   91603 out.go:179] * The control-plane node ha-608611 apiserver is not running: (state=Stopped)
	I1009 19:08:12.510948   91603 out.go:179]   To start a cluster, run: "minikube start -p ha-608611"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-608611 node add --control-plane --alsologtostderr -v 5" : exit status 103
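The non-zero exit above comes from a preflight check rather than from the join itself: node add first loads the cluster and probes the primary control plane for a running kube-apiserver (the sudo pgrep -xnf kube-apiserver.*minikube.* step visible in the stderr trace), and that probe returned nothing, so the command stops with exit status 103 before attempting to add a node. A minimal sketch of repeating the probe by hand, assuming the profile container is still named ha-608611 as in the inspect output below:

	# minikube's own view of the component states for this profile
	out/minikube-linux-amd64 status -p ha-608611

	# the same apiserver probe the trace runs over SSH, via docker exec instead
	docker exec ha-608611 sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	# the CRI view, since this profile runs CRI-O
	docker exec ha-608611 sudo crictl ps --name kube-apiserver

An empty pgrep result with exit status 1, as in the trace, is what maps to the "apiserver is not running: (state=Stopped)" message.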
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 87136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:02:02.606525681Z",
	            "FinishedAt": "2025-10-09T19:02:01.288438646Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34e364d29b995e9f397e4ff58ac14a48a876f810f7b517d883d6edcdbb1bf188",
	            "SandboxKey": "/var/run/docker/netns/34e364d29b99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:4f:68:d2:b9:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "9607201385fb50d883c3f937998cbc9542b588f50f9c40d6bdf9c41bc6baf758",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
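The Ports block in this inspect output is what the test helpers use to reach the node: the stderr trace above resolves the 22/tcp mapping with a Go template and then dials 127.0.0.1:32793. A sketch of the same lookup by hand, assuming the ha-608611 container from this report:

	# host port mapped to the container's SSH port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-608611

	# host port mapped to the apiserver port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-608611

All five ports are published on 127.0.0.1 with empty HostPort bindings, so Docker assigns fresh ephemeral host ports on each container start; the StartedAt timestamp above shows this container was last restarted at 19:02.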
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (296.482097ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
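The "may be ok" note reflects how minikube status reports health: per the minikube status help text, the exit status encodes the host, cluster, and Kubernetes states on its low bits (1, 2, and 4 respectively), so a non-zero code here is expected output, not a tooling failure. A quick way to read it, assuming the same profile:

	out/minikube-linux-amd64 status -p ha-608611 -n ha-608611; echo "exit=$?"

With the host line printing Running, exit status 2 decodes as: host OK, cluster components not OK, which is consistent with the stopped apiserver seen above.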
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                                               │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                                           │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │ 09 Oct 25 19:02 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │                     │
	│ node    │ ha-608611 node add --control-plane --alsologtostderr -v 5                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:02:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:02:02.366634   86934 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:02:02.366900   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.366909   86934 out.go:374] Setting ErrFile to fd 2...
	I1009 19:02:02.366914   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.367183   86934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:02:02.367673   86934 out.go:368] Setting JSON to false
	I1009 19:02:02.368576   86934 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6270,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:02:02.368665   86934 start.go:141] virtualization: kvm guest
	I1009 19:02:02.370893   86934 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:02:02.372496   86934 notify.go:220] Checking for updates...
	I1009 19:02:02.372569   86934 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:02:02.374010   86934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:02:02.375862   86934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:02.377311   86934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 19:02:02.378757   86934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:02:02.380255   86934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:02:02.382046   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:02.382523   86934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:02:02.405566   86934 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:02:02.405698   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.460511   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.449781611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.460617   86934 docker.go:318] overlay module found
	I1009 19:02:02.467934   86934 out.go:179] * Using the docker driver based on existing profile
	I1009 19:02:02.472893   86934 start.go:305] selected driver: docker
	I1009 19:02:02.472930   86934 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.473021   86934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:02:02.473177   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.530403   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.520535313 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.530972   86934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:02:02.530995   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:02.531058   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:02.531099   86934 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:02:02.536297   86934 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 19:02:02.537921   86934 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:02:02.539315   86934 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:02:02.540530   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:02.540558   86934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:02:02.540566   86934 cache.go:64] Caching tarball of preloaded images
	I1009 19:02:02.540649   86934 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:02:02.540659   86934 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:02:02.540644   86934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:02:02.540747   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.560713   86934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:02:02.560736   86934 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:02:02.560755   86934 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:02:02.560776   86934 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:02:02.560826   86934 start.go:364] duration metric: took 34.956µs to acquireMachinesLock for "ha-608611"
	I1009 19:02:02.560843   86934 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:02:02.560848   86934 fix.go:54] fixHost starting: 
	I1009 19:02:02.561074   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.578279   86934 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 19:02:02.578318   86934 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:02:02.580033   86934 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 19:02:02.580095   86934 cli_runner.go:164] Run: docker start ha-608611
	I1009 19:02:02.818090   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.837398   86934 kic.go:430] container "ha-608611" state is running.
	I1009 19:02:02.837716   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:02.857081   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.857332   86934 machine.go:93] provisionDockerMachine start ...
	I1009 19:02:02.857395   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:02.875516   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:02.875763   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:02.875778   86934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:02:02.876346   86934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:32793: read: connection reset by peer
	I1009 19:02:06.023115   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.023157   86934 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 19:02:06.023213   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.041188   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.041419   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.041437   86934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 19:02:06.195947   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.196039   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.214427   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.214707   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.214726   86934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:02:06.359913   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:02:06.359938   86934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 19:02:06.359976   86934 ubuntu.go:190] setting up certificates
	I1009 19:02:06.359987   86934 provision.go:84] configureAuth start
	I1009 19:02:06.360055   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:06.377565   86934 provision.go:143] copyHostCerts
	I1009 19:02:06.377598   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377621   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 19:02:06.377632   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377706   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 19:02:06.377792   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377809   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 19:02:06.377815   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377841   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 19:02:06.377885   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377901   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 19:02:06.377907   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377930   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 19:02:06.377978   86934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 19:02:06.551568   86934 provision.go:177] copyRemoteCerts
	I1009 19:02:06.551627   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:02:06.551664   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.569563   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:06.671559   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:02:06.671624   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:02:06.689362   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:02:06.689417   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:02:06.706820   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:02:06.706884   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:02:06.723659   86934 provision.go:87] duration metric: took 363.656182ms to configureAuth
	I1009 19:02:06.723684   86934 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:02:06.723837   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:06.723932   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.741523   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.741719   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.741733   86934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:02:06.997259   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:02:06.997278   86934 machine.go:96] duration metric: took 4.139930505s to provisionDockerMachine
	I1009 19:02:06.997295   86934 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 19:02:06.997303   86934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:02:06.997364   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:02:06.997436   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.015165   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.117424   86934 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:02:07.121129   86934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:02:07.121172   86934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:02:07.121187   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 19:02:07.121231   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 19:02:07.121302   86934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 19:02:07.121313   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 19:02:07.121398   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:02:07.128962   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:07.146444   86934 start.go:296] duration metric: took 149.135002ms for postStartSetup
	I1009 19:02:07.146528   86934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:02:07.146561   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.164604   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.263216   86934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:02:07.267755   86934 fix.go:56] duration metric: took 4.706900009s for fixHost
	I1009 19:02:07.267794   86934 start.go:83] releasing machines lock for "ha-608611", held for 4.706943222s
	I1009 19:02:07.267857   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:07.284443   86934 ssh_runner.go:195] Run: cat /version.json
	I1009 19:02:07.284488   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.284518   86934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:02:07.284564   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.302426   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.302797   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.452812   86934 ssh_runner.go:195] Run: systemctl --version
	I1009 19:02:07.459227   86934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:02:07.492322   86934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:02:07.496837   86934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:02:07.496893   86934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:02:07.504414   86934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:02:07.504435   86934 start.go:495] detecting cgroup driver to use...
	I1009 19:02:07.504461   86934 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:02:07.504497   86934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:02:07.518639   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:02:07.530028   86934 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:02:07.530080   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:02:07.543210   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:02:07.554574   86934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:02:07.631689   86934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:02:07.710043   86934 docker.go:234] disabling docker service ...
	I1009 19:02:07.710103   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:02:07.723929   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:02:07.736312   86934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:02:07.813951   86934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:02:07.891501   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:02:07.903630   86934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:02:07.917404   86934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:02:07.917468   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.926188   86934 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:02:07.926260   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.935124   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.943686   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.952342   86934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:02:07.960386   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.969265   86934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.977652   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.986892   86934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:02:07.994317   86934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:02:08.001853   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.079819   86934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:02:08.184066   86934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:02:08.184131   86934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:02:08.188032   86934 start.go:563] Will wait 60s for crictl version
	I1009 19:02:08.188080   86934 ssh_runner.go:195] Run: which crictl
	I1009 19:02:08.191568   86934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:02:08.215064   86934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:02:08.215130   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.242668   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.272310   86934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:02:08.273867   86934 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:02:08.291028   86934 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:02:08.295020   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.304927   86934 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:02:08.305037   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:08.305076   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.334586   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.334605   86934 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:02:08.334646   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.359864   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.359884   86934 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:02:08.359891   86934 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:02:08.359982   86934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:02:08.360041   86934 ssh_runner.go:195] Run: crio config
	I1009 19:02:08.403513   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:08.403536   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:08.403553   86934 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:02:08.403581   86934 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:02:08.403758   86934 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
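[Editor's note] The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are generated in-memory and, per the scp line below, written to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of rendering such a manifest from a Go text/template; the template text and field names here are illustrative only, not minikube's actual template code.

package main

import (
	"os"
	"text/template"
)

// Render a kubeadm InitConfiguration fragment from values matching the
// log (advertiseAddress 192.168.49.2, bindPort 8443). Illustrative only.
const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	data := struct {
		NodeIP        string
		APIServerPort int
	}{"192.168.49.2", 8443}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}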
	I1009 19:02:08.403826   86934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:02:08.411830   86934 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:02:08.411894   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:02:08.419468   86934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:02:08.432379   86934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:02:08.445216   86934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
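[Editor's note] The three "scp memory --> <path> (N bytes)" lines mean the kubelet drop-in, the kubelet unit, and the kubeadm manifest are streamed straight from memory to the node; no local temp file is involved. A hedged sketch of that pattern using the ssh binary (minikube itself uses an SSH client library behind ssh_runner); copyMemoryToNode and sshArgs are illustrative names.

package sketch

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyMemoryToNode streams data over SSH into remotePath via sudo tee,
// giving the same effect as the "scp memory --> ..." lines above.
// sshArgs would carry host, port, and identity-file flags.
func copyMemoryToNode(sshArgs []string, data []byte, remotePath string) error {
	args := append(sshArgs, fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
	cmd := exec.Command("ssh", args...)
	cmd.Stdin = bytes.NewReader(data)
	return cmd.Run()
}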
	I1009 19:02:08.457891   86934 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:02:08.461609   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.471627   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.548747   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:08.570439   86934 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 19:02:08.570462   86934 certs.go:195] generating shared ca certs ...
	I1009 19:02:08.570494   86934 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:08.570644   86934 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 19:02:08.570699   86934 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 19:02:08.570711   86934 certs.go:257] generating profile certs ...
	I1009 19:02:08.570809   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 19:02:08.570886   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 19:02:08.570937   86934 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 19:02:08.570950   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:02:08.570974   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:02:08.570990   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:02:08.571008   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:02:08.571026   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:02:08.571045   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:02:08.571062   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:02:08.571080   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:02:08.571169   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 19:02:08.571210   86934 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 19:02:08.571224   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:02:08.571259   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:02:08.571305   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:02:08.571336   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 19:02:08.571392   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:08.571429   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.571452   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.571470   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.572252   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:02:08.590519   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:02:08.608788   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:02:08.628771   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:02:08.652296   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:02:08.669442   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:02:08.686413   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:02:08.702970   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:02:08.719872   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 19:02:08.736350   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:02:08.753020   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 19:02:08.770756   86934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:02:08.782846   86934 ssh_runner.go:195] Run: openssl version
	I1009 19:02:08.788680   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:02:08.796773   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800287   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800342   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.834331   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:02:08.842576   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 19:02:08.850707   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854375   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854417   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.888132   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 19:02:08.896190   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 19:02:08.904560   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908107   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908167   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.941616   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
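[Editor's note] The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory convention: a trusted cert under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (here b5213941.0, 51391683.0, and 3ec20f2e.0). A sketch of the same two steps; it assumes openssl on PATH and sufficient privileges, and linkCACert is a hypothetical helper.

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash for certPath and points
// /etc/ssl/certs/<hash>.0 at it, like the ln -fs sequence in the log.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace a stale link, as ln -fs would
	return os.Symlink(certPath, link)
}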
	I1009 19:02:08.949683   86934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:02:08.953888   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:02:08.988843   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:02:09.022384   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:02:09.055785   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:02:09.100654   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:02:09.138816   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
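[Editor's note] Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within 24 hours, which is how minikube decides whether a cert needs regenerating on restart. The equivalent check in pure Go with crypto/x509, as a sketch:

package sketch

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// certExpiresSoon reports whether the certificate at path expires within
// window, mirroring `openssl x509 -checkend <seconds>`.
func certExpiresSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}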
	I1009 19:02:09.175373   86934 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:09.175553   86934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:02:09.175626   86934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:02:09.203282   86934 cri.go:89] found id: ""
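[Editor's note] The 'found id: ""' line means the crictl query returned no kube-system containers yet, since the control plane has not come back up. The query itself is a label filter over all containers; a minimal Go wrapper, assuming crictl is configured for the node's CRI socket (kubeSystemContainerIDs is an illustrative name):

package sketch

import (
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs mirrors the cri.go step above: list the IDs of
// every container, running or not, whose pod namespace is kube-system.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}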
	I1009 19:02:09.203337   86934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:02:09.211170   86934 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:02:09.211189   86934 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:02:09.211233   86934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:02:09.218525   86934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:02:09.218879   86934 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.218998   86934 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 19:02:09.219307   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.219795   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.220220   86934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:02:09.220236   86934 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:02:09.220244   86934 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:02:09.220251   86934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:02:09.220258   86934 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:02:09.220304   86934 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:02:09.220587   86934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:02:09.228184   86934 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:02:09.228212   86934 kubeadm.go:601] duration metric: took 17.018594ms to restartPrimaryControlPlane
	I1009 19:02:09.228221   86934 kubeadm.go:402] duration metric: took 52.859442ms to StartCluster
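[Editor's note] The decision at kubeadm.go:634 follows from the sudo diff -u a few lines up: diff exits 0 when the deployed kubeadm.yaml matches the freshly generated kubeadm.yaml.new, so the restart path can skip re-running kubeadm entirely. A sketch of that exit-code check (needsReconfig is a hypothetical helper; it assumes diff(1) on the node):

package sketch

import (
	"errors"
	"os/exec"
)

// needsReconfig runs diff -u over the deployed and generated kubeadm
// manifests: exit 0 means identical (no reconfiguration needed), exit 1
// means they differ; anything else is a real error.
func needsReconfig(current, generated string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", current, generated).Run()
	if err == nil {
		return false, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil
	}
	return false, err
}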
	I1009 19:02:09.228235   86934 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228289   86934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.228747   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228944   86934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:09.229006   86934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:02:09.229112   86934 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 19:02:09.229129   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:09.229158   86934 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 19:02:09.229194   86934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 19:02:09.229132   86934 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 19:02:09.229294   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.229535   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.229746   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.232398   86934 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:09.234182   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:09.249828   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.250212   86934 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 19:02:09.250254   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.250729   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.253666   86934 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:09.255198   86934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.255220   86934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:09.255295   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.279913   86934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:09.279935   86934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:09.279997   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.280244   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.298795   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.340817   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:09.353728   86934 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 19:02:09.392883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.410568   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.451098   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.451133   86934 retry.go:31] will retry after 367.251438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
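[Editor's note] From here to the end of the section, the two addon applies fail identically: kubectl cannot validate the manifests because nothing is listening on localhost:8443 yet, and retry.go re-schedules each apply with a growing, jittered delay (a few hundred milliseconds at first, up to 13.3s below). A simplified version of that loop; minikube's actual retry package differs in detail, and retryWithBackoff is an illustrative name.

package sketch

import (
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a roughly doubling, jittered delay between tries, like the
// "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		jitter := time.Duration(0)
		if half := int64(delay / 2); half > 0 {
			jitter = time.Duration(rand.Int63n(half))
		}
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}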
	W1009 19:02:09.467582   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.467614   86934 retry.go:31] will retry after 202.583149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.671071   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.728118   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.728165   86934 retry.go:31] will retry after 532.603205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.819359   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:09.870710   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.870743   86934 retry.go:31] will retry after 279.776339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.151303   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.203393   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.203423   86934 retry.go:31] will retry after 347.914412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.261624   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:10.312099   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.312161   86934 retry.go:31] will retry after 754.410355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.551883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.604202   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.604236   86934 retry.go:31] will retry after 610.586718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.067261   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.118580   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.118609   86934 retry.go:31] will retry after 814.916965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.215892   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:11.267928   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.267972   86934 retry.go:31] will retry after 1.45438082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:11.354562   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
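[Editor's note] In parallel with the addon retries, node_ready.go polls the Node object and keeps logging this warning until the apiserver answers on 192.168.49.2:8443. The check reduces to reading the Ready condition off the Node status; a client-go sketch, assuming an already-constructed clientset (waitNodeReady and the 2s interval are illustrative, the 6m0s timeout matches the log):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node reports Ready=True, the same
// loop node_ready.go is running above.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // e.g. connection refused: keep polling
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}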
	I1009 19:02:11.934655   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.986484   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.986513   86934 retry.go:31] will retry after 1.124124769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.723181   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:12.774656   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.774689   86934 retry.go:31] will retry after 1.232500279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.111665   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:13.165517   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.165552   86934 retry.go:31] will retry after 2.16641371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:13.355245   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:14.007705   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:14.059964   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:14.059992   86934 retry.go:31] will retry after 3.058954256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.332271   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:15.386449   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.386473   86934 retry.go:31] will retry after 3.386344457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:15.854462   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:17.120044   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:17.172191   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:17.172228   86934 retry.go:31] will retry after 5.108857909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:17.855169   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:18.773686   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:18.825043   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:18.825075   86934 retry.go:31] will retry after 4.328736912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:20.354784   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:22.282235   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:22.336593   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:22.336620   86934 retry.go:31] will retry after 8.469274029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:22.355192   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:23.154808   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:23.207154   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.207192   86934 retry.go:31] will retry after 9.59352501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:24.854514   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:27.355255   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:29.854449   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:30.806123   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:30.858604   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.858637   86934 retry.go:31] will retry after 13.297733582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:32.354331   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:32.800848   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:32.854427   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:32.854451   86934 retry.go:31] will retry after 8.328873063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:34.354417   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:36.354493   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:38.354571   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:40.354643   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:41.184043   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:41.237661   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:41.237694   86934 retry.go:31] will retry after 10.702907746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:42.854628   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:44.156959   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:44.208755   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:44.208790   86934 retry.go:31] will retry after 18.065677643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:45.354394   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:47.854450   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:49.854575   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:51.941580   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:51.995763   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:51.995796   86934 retry.go:31] will retry after 22.859549113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:52.354574   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:54.854455   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:57.354280   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:59.354606   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:01.854286   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:02.274776   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:02.329455   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:02.329481   86934 retry.go:31] will retry after 18.531804756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:03.854398   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:05.855306   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:08.354544   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:10.354642   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:12.854650   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:14.855487   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:14.910832   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:14.910866   86934 retry.go:31] will retry after 23.992226966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:15.354856   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:17.854777   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:19.855067   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:20.862242   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:20.916094   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:20.916120   86934 retry.go:31] will retry after 48.100773528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:22.355103   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:24.355298   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:26.855213   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:29.354698   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:31.854367   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:33.854849   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:36.354516   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:38.354590   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:38.903767   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:38.956838   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:38.956956   86934 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1009 19:03:40.854352   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 11 similar node_ready.go:55 warnings elided: the same connection-refused Get against https://192.168.49.2:8443/api/v1/nodes/ha-608611 repeated from 19:03:42 through 19:04:04 ...]
	W1009 19:04:06.854735   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:04:09.017880   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:04:09.070622   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:09.070759   86934 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:04:09.073837   86934 out.go:179] * Enabled addons: 
	I1009 19:04:09.075208   86934 addons.go:514] duration metric: took 1m59.846203175s for enable addons: enabled=[]
	W1009 19:04:09.354738   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 105 similar node_ready.go:55 warnings elided: the same connection-refused Get against https://192.168.49.2:8443/api/v1/nodes/ha-608611 repeated every ~2-2.5s from 19:04:11 through 19:08:05 ...]
	W1009 19:08:08.354483   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:08:09.353866   86934 node_ready.go:38] duration metric: took 6m0.000084484s for node "ha-608611" to be "Ready" ...
	I1009 19:08:09.356453   86934 out.go:203] 
	W1009 19:08:09.357971   86934 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:08:09.357991   86934 out.go:285] * 
	W1009 19:08:09.359976   86934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:08:09.361285   86934 out.go:203] 
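The repeated "retry.go:31] will retry after ..." lines above come from minikube's generic backoff helper, which re-runs each kubectl apply with a growing, jittered delay until the addon deadline expires. A minimal Go sketch of that retry shape follows; it is illustrative only, and the function name, initial delay, and growth factor are assumptions, not minikube's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // retryWithBackoff re-runs fn until it succeeds or the deadline passes,
    // sleeping a growing, jittered interval between attempts -- the same
    // shape as the "will retry after ..." lines in the log.
    func retryWithBackoff(deadline time.Duration, fn func() error) error {
        start := time.Now()
        wait := 5 * time.Second // assumed initial delay
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("deadline exceeded: %w", err)
            }
            jitter := time.Duration(rand.Int63n(int64(wait / 2)))
            fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
            time.Sleep(wait + jitter)
            wait = wait * 3 / 2 // assumed growth factor
        }
    }

    func main() {
        err := retryWithBackoff(2*time.Minute, func() error {
            // Mirrors the failing command in the log; it keeps failing while
            // the apiserver on localhost:8443 refuses connections.
            out, err := exec.Command("kubectl", "apply", "--force",
                "-f", "/etc/kubernetes/addons/storageclass.yaml").CombinedOutput()
            if err != nil {
                return errors.New(string(out))
            }
            return nil
        })
        if err != nil {
            fmt.Println("giving up:", err)
        }
    }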
	
	
	==> CRI-O <==
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.680992477Z" level=info msg="createCtr: removing container 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.681024255Z" level=info msg="createCtr: deleting container 5f48c2696f05e3f84156e1ed3396575767d1c4b6f73313be59e732cda081faf2 from storage" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:07 ha-608611 crio[522]: time="2025-10-09T19:08:07.682961411Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-608611_kube-system_b479c8e1034fd1754049af8325a8c50b_0" id=4b0fd758-160a-4e22-b7d5-1e7a9919873d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.654554751Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a87463c3-7899-4e1c-b57b-2bcb42251a04 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.65556571Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=71b1124c-3b0e-4e79-bd73-ba5c31d46517 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.656551778Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-608611/kube-apiserver" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.656799849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.660761221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.661271798Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.675878882Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.67740191Z" level=info msg="createCtr: deleting container ID 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394 from idIndex" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.677449723Z" level=info msg="createCtr: removing container 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.677490223Z" level=info msg="createCtr: deleting container 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394 from storage" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.679643261Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.654162481Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=96378813-22fb-43ac-bd57-1d04896985a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.655066988Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=87692678-10bd-4e0c-a8d1-16e95cdb6c3c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.656021697Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.656279613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.659482213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.659893836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.676193549Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.677801834Z" level=info msg="createCtr: deleting container ID 97744c9a435e1e305f461cf739ebe846fcadc48a6b97c7cb9d66fd2684e35caf from idIndex" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.67783707Z" level=info msg="createCtr: removing container 97744c9a435e1e305f461cf739ebe846fcadc48a6b97c7cb9d66fd2684e35caf" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.677867121Z" level=info msg="createCtr: deleting container 97744c9a435e1e305f461cf739ebe846fcadc48a6b97c7cb9d66fd2684e35caf from storage" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.680178315Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
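Every CreateContainer attempt above dies with "cannot open sd-bus: No such file or directory": the OCI runtime is trying to talk to systemd over D-Bus (as a systemd cgroup manager does) and no bus socket is reachable inside the node. A small diagnostic sketch follows; the socket paths are assumptions based on common systemd/D-Bus defaults.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // If these are absent, a systemd cgroup manager cannot open sd-bus
        // and container creation fails exactly as in the CRI-O log above.
        paths := []string{
            "/run/systemd/system",         // present only when systemd is PID 1
            "/run/dbus/system_bus_socket", // system D-Bus socket
        }
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                fmt.Printf("missing: %s (%v)\n", p, err)
            } else {
                fmt.Printf("present: %s\n", p)
            }
        }
    }

When the sockets are missing, the usual remedies are to run the node with a working systemd or to switch the runtime's cgroup manager to cgroupfs.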
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:08:13.397002    2358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:13.397510    2358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:13.399097    2358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:13.399591    2358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:13.401432    2358 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
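kubectl never gets as far as describing nodes: every API group-list fetch is refused, meaning nothing is listening on localhost:8443 at all. A quick TCP probe (the endpoint is taken from the log) distinguishes "apiserver down" from a kubectl or kubeconfig problem:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the endpoint kubectl is failing against in the log above.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port unreachable:", err) // "connection refused" here
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }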
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:08:13 up  1:50,  0 user,  load average: 0.00, 0.14, 0.15
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:08:07 ha-608611 kubelet[671]:  > podSandboxID="d7b0b4143624f2e40fa8420bc4baa97f53997144043ca1197badd7726113b7b9"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683416     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:07 ha-608611 kubelet[671]:         container etcd start failed in pod etcd-ha-608611_kube-system(b479c8e1034fd1754049af8325a8c50b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:07 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:07 ha-608611 kubelet[671]: E1009 19:08:07.683446     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-608611" podUID="b479c8e1034fd1754049af8325a8c50b"
	Oct 09 19:08:08 ha-608611 kubelet[671]: E1009 19:08:08.674312     671 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-608611\" not found"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.292738     671 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-608611?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:08:11 ha-608611 kubelet[671]: I1009 19:08:11.463091     671 kubelet_node_status.go:75] "Attempting to register node" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.463487     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.654012     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.679971     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:11 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:11 ha-608611 kubelet[671]:  > podSandboxID="ac516349e9b506d388b69dc76eeb2ab388dd4861bbc6be0177da37d2c5d29a10"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.680115     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:11 ha-608611 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:11 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.680178     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.653706     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.680457     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:12 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:12 ha-608611 kubelet[671]:  > podSandboxID="2815d9108b060ab5e9615f041c0109d9325e4b92666a1d711f35f61789cf6add"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.680578     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:12 ha-608611 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:12 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.680618     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (297.075635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.53s)
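Every CreateContainerError in the kubelet log above bottoms out in the same message, "cannot open sd-bus: No such file or directory": the OCI runtime's systemd cgroup handling apparently cannot open a systemd bus socket inside the restarted node container, so etcd, kube-apiserver, and kube-scheduler never start and every later check sees connection refused on 8443. A minimal Go probe for confirming the missing socket from inside the node might look like the sketch below; the two paths are conventional systemd/D-Bus defaults, not values taken from this log.

    package main

    import (
    	"fmt"
    	"os"
    )

    // Stats the two sockets a systemd cgroup driver typically dials.
    // Both paths are conventional defaults, not taken from this report.
    func main() {
    	for _, p := range []string{
    		"/run/systemd/private",            // systemd private API socket (root only)
    		"/var/run/dbus/system_bus_socket", // D-Bus system bus socket
    	} {
    		if _, err := os.Stat(p); err != nil {
    			fmt.Printf("%s: %v\n", p, err) // e.g. "no such file or directory"
    		} else {
    			fmt.Printf("%s: present\n", p)
    		}
    	}
    }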

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-608611" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nf
sshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonIm
ages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-608611" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-608611\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-608611\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShar
esRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-608611\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\
"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --o
utput json"
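Both assertions above (ha_test.go:305 and ha_test.go:309) decode the output of profile list --output json and compare the node count and the aggregate profile status against what a healthy four-node HA cluster should report. A minimal sketch of that style of check, with deliberately simplified struct shapes (only the fields the assertions read; the real config schema is much larger), might be:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Only the fields the assertions look at; the real schema has many more.
    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    		Config struct {
    			Nodes []struct {
    				IP           string `json:"IP"`
    				ControlPlane bool   `json:"ControlPlane"`
    			} `json:"Nodes"`
    		} `json:"Config"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64",
    		"profile", "list", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		panic(err)
    	}
    	for _, p := range pl.Valid {
    		if p.Name != "ha-608611" {
    			continue
    		}
    		if n := len(p.Config.Nodes); n != 4 {
    			fmt.Printf("expected 4 nodes but have %d nodes\n", n)
    		}
    		if p.Status != "HAppy" {
    			fmt.Printf("expected %q status but have %q status\n", "HAppy", p.Status)
    		}
    	}
    }

Here the profile reports Status "Starting" with a single node in Config.Nodes, so both checks fail exactly as logged.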
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-608611
helpers_test.go:243: (dbg) docker inspect ha-608611:

-- stdout --
	[
	    {
	        "Id": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	        "Created": "2025-10-09T18:44:43.71277862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 87136,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:02:02.606525681Z",
	            "FinishedAt": "2025-10-09T19:02:01.288438646Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/hosts",
	        "LogPath": "/var/lib/docker/containers/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c/92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c-json.log",
	        "Name": "/ha-608611",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-608611:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-608611",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92fc23109156d449908e3553a7ff0a4906314bbf6a033ad0cfc40a8d028b381c",
	                "LowerDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4c047624c81a0a6733ec3113ebdad87d1048ac354138b37cd3cb16477fc908e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-608611",
	                "Source": "/var/lib/docker/volumes/ha-608611/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-608611",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-608611",
	                "name.minikube.sigs.k8s.io": "ha-608611",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34e364d29b995e9f397e4ff58ac14a48a876f810f7b517d883d6edcdbb1bf188",
	            "SandboxKey": "/var/run/docker/netns/34e364d29b99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-608611": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:4f:68:d2:b9:a8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d41ad8abecfe5e57fea462a2d7f6665aa3879de8bfc3fe0269f712186c14e257",
	                    "EndpointID": "9607201385fb50d883c3f937998cbc9542b588f50f9c40d6bdf9c41bc6baf758",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-608611",
	                        "92fc23109156"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
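Note how every PortBindings entry in the inspect output declares an empty HostPort (Docker assigns an ephemeral port at start), while the actual assignments appear only under NetworkSettings.Ports (22/tcp is bound to 127.0.0.1:32793 here). The "Last Start" log further down reads that mapping with a Go template; the sketch below shells out with the same template and is an illustration only, not minikube's actual helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Asks dockerd which ephemeral host port backs 22/tcp in a container,
    // using the same template the "Last Start" log runs via cli_runner.
    func hostPortForSSH(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostPortForSSH("ha-608611")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh port:", port) // 32793 in the inspect output above
    }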
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-608611 -n ha-608611: exit status 2 (294.274307ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
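As the "(may be ok)" note says, a non-zero exit from minikube status is informational: the exit code signals degraded components while stdout still carries the requested field ("Running" above). A hedged Go sketch of consuming it that way, assuming only that stdout remains usable on a non-zero exit:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "ha-608611", "-n", "ha-608611")
    	out, err := cmd.Output() // stdout is still returned alongside an ExitError
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		// Non-zero exit encodes component state; treat it as informational.
    		fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
    	} else if err != nil {
    		panic(err)
    	}
    	fmt.Println("host state:", strings.TrimSpace(string(out))) // e.g. "Running"
    }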
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-608611 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:53 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ kubectl │ ha-608611 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node add --alsologtostderr -v 5                                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node stop m02 --alsologtostderr -v 5                                               │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node start m02 --alsologtostderr -v 5                                              │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:54 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │ 09 Oct 25 18:55 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5                                           │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 18:55 UTC │                     │
	│ node    │ ha-608611 node list --alsologtostderr -v 5                                                   │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                                             │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                        │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │ 09 Oct 25 19:02 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:02 UTC │                     │
	│ node    │ ha-608611 node add --control-plane --alsologtostderr -v 5                                    │ ha-608611 │ jenkins │ v1.37.0 │ 09 Oct 25 19:08 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:02:02
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:02:02.366634   86934 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:02:02.366900   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.366909   86934 out.go:374] Setting ErrFile to fd 2...
	I1009 19:02:02.366914   86934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:02:02.367183   86934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:02:02.367673   86934 out.go:368] Setting JSON to false
	I1009 19:02:02.368576   86934 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6270,"bootTime":1760030252,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:02:02.368665   86934 start.go:141] virtualization: kvm guest
	I1009 19:02:02.370893   86934 out.go:179] * [ha-608611] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:02:02.372496   86934 notify.go:220] Checking for updates...
	I1009 19:02:02.372569   86934 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:02:02.374010   86934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:02:02.375862   86934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:02.377311   86934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 19:02:02.378757   86934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:02:02.380255   86934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:02:02.382046   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:02.382523   86934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:02:02.405566   86934 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:02:02.405698   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.460511   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.449781611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.460617   86934 docker.go:318] overlay module found
	I1009 19:02:02.467934   86934 out.go:179] * Using the docker driver based on existing profile
	I1009 19:02:02.472893   86934 start.go:305] selected driver: docker
	I1009 19:02:02.472930   86934 start.go:925] validating driver "docker" against &{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:02.473021   86934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:02:02.473177   86934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:02:02.530403   86934 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:02:02.520535313 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:02:02.530972   86934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:02:02.530995   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:02.531058   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:02.531099   86934 start.go:349] cluster config:
	{Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1009 19:02:02.536297   86934 out.go:179] * Starting "ha-608611" primary control-plane node in "ha-608611" cluster
	I1009 19:02:02.537921   86934 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:02:02.539315   86934 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:02:02.540530   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:02.540558   86934 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:02:02.540566   86934 cache.go:64] Caching tarball of preloaded images
	I1009 19:02:02.540649   86934 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:02:02.540659   86934 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:02:02.540644   86934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:02:02.540747   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.560713   86934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:02:02.560736   86934 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:02:02.560755   86934 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:02:02.560776   86934 start.go:360] acquireMachinesLock for ha-608611: {Name:mk7579977ab708dc80cadd5f1683dbd9d0a08d4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:02:02.560826   86934 start.go:364] duration metric: took 34.956µs to acquireMachinesLock for "ha-608611"
	I1009 19:02:02.560843   86934 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:02:02.560848   86934 fix.go:54] fixHost starting: 
	I1009 19:02:02.561074   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.578279   86934 fix.go:112] recreateIfNeeded on ha-608611: state=Stopped err=<nil>
	W1009 19:02:02.578318   86934 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:02:02.580033   86934 out.go:252] * Restarting existing docker container for "ha-608611" ...
	I1009 19:02:02.580095   86934 cli_runner.go:164] Run: docker start ha-608611
	I1009 19:02:02.818090   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:02.837398   86934 kic.go:430] container "ha-608611" state is running.
	I1009 19:02:02.837716   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:02.857081   86934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/config.json ...
	I1009 19:02:02.857332   86934 machine.go:93] provisionDockerMachine start ...
	I1009 19:02:02.857395   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:02.875516   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:02.875763   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:02.875778   86934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:02:02.876346   86934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39628->127.0.0.1:32793: read: connection reset by peer
	I1009 19:02:06.023115   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.023157   86934 ubuntu.go:182] provisioning hostname "ha-608611"
	I1009 19:02:06.023213   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.041188   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.041419   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.041437   86934 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-608611 && echo "ha-608611" | sudo tee /etc/hostname
	I1009 19:02:06.195947   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-608611
	
	I1009 19:02:06.196039   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.214427   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.214707   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.214726   86934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-608611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-608611/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-608611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:02:06.359913   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:02:06.359938   86934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 19:02:06.359976   86934 ubuntu.go:190] setting up certificates
	I1009 19:02:06.359987   86934 provision.go:84] configureAuth start
	I1009 19:02:06.360055   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:06.377565   86934 provision.go:143] copyHostCerts
	I1009 19:02:06.377598   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377621   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 19:02:06.377632   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:02:06.377706   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 19:02:06.377792   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377809   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 19:02:06.377815   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:02:06.377841   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 19:02:06.377885   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377901   86934 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 19:02:06.377907   86934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:02:06.377930   86934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 19:02:06.377978   86934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.ha-608611 san=[127.0.0.1 192.168.49.2 ha-608611 localhost minikube]
	I1009 19:02:06.551568   86934 provision.go:177] copyRemoteCerts
	I1009 19:02:06.551627   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:02:06.551664   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.569563   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:06.671559   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:02:06.671624   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:02:06.689362   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:02:06.689417   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:02:06.706820   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:02:06.706884   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:02:06.723659   86934 provision.go:87] duration metric: took 363.656182ms to configureAuth
	I1009 19:02:06.723684   86934 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:02:06.723837   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:06.723932   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:06.741523   86934 main.go:141] libmachine: Using SSH client type: native
	I1009 19:02:06.741719   86934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1009 19:02:06.741733   86934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:02:06.997259   86934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:02:06.997278   86934 machine.go:96] duration metric: took 4.139930505s to provisionDockerMachine
	I1009 19:02:06.997295   86934 start.go:293] postStartSetup for "ha-608611" (driver="docker")
	I1009 19:02:06.997303   86934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:02:06.997364   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:02:06.997436   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.015165   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.117424   86934 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:02:07.121129   86934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:02:07.121172   86934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:02:07.121187   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 19:02:07.121231   86934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 19:02:07.121302   86934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 19:02:07.121313   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /etc/ssl/certs/148802.pem
	I1009 19:02:07.121398   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:02:07.128962   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:07.146444   86934 start.go:296] duration metric: took 149.135002ms for postStartSetup
	I1009 19:02:07.146528   86934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:02:07.146561   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.164604   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.263216   86934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:02:07.267755   86934 fix.go:56] duration metric: took 4.706900009s for fixHost
	I1009 19:02:07.267794   86934 start.go:83] releasing machines lock for "ha-608611", held for 4.706943222s
	I1009 19:02:07.267857   86934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-608611
	I1009 19:02:07.284443   86934 ssh_runner.go:195] Run: cat /version.json
	I1009 19:02:07.284488   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.284518   86934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:02:07.284564   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:07.302426   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.302797   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:07.452812   86934 ssh_runner.go:195] Run: systemctl --version
	I1009 19:02:07.459227   86934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:02:07.492322   86934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:02:07.496837   86934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:02:07.496893   86934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:02:07.504414   86934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:02:07.504435   86934 start.go:495] detecting cgroup driver to use...
	I1009 19:02:07.504461   86934 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:02:07.504497   86934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:02:07.518639   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:02:07.530028   86934 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:02:07.530080   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:02:07.543210   86934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:02:07.554574   86934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:02:07.631689   86934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:02:07.710043   86934 docker.go:234] disabling docker service ...
	I1009 19:02:07.710103   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:02:07.723929   86934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:02:07.736312   86934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:02:07.813951   86934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:02:07.891501   86934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:02:07.903630   86934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:02:07.917404   86934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:02:07.917468   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.926188   86934 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:02:07.926260   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.935124   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.943686   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.952342   86934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:02:07.960386   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.969265   86934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:02:07.977652   86934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
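The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, move conmon into the pod cgroup, and (in the last three edits) add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A sketch of the first few edits as plain strings (crioConfEdits is an assumed helper name):

    package main

    import "fmt"

    // crioConfEdits returns the in-place sed edits applied over ssh_runner
    // in the log, parameterized by pause image and cgroup driver.
    func crioConfEdits(pauseImage, cgroupDriver string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
            // ...followed by the default_sysctls edits shown in the log.
        }
    }

    func main() {
        for _, c := range crioConfEdits("registry.k8s.io/pause:3.10.1", "systemd") {
            fmt.Println(c)
        }
    }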
	I1009 19:02:07.986892   86934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:02:07.994317   86934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:02:08.001853   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.079819   86934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:02:08.184066   86934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:02:08.184131   86934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:02:08.188032   86934 start.go:563] Will wait 60s for crictl version
	I1009 19:02:08.188080   86934 ssh_runner.go:195] Run: which crictl
	I1009 19:02:08.191568   86934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:02:08.215064   86934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:02:08.215130   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.242668   86934 ssh_runner.go:195] Run: crio --version
	I1009 19:02:08.272310   86934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:02:08.273867   86934 cli_runner.go:164] Run: docker network inspect ha-608611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:02:08.291028   86934 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 19:02:08.295020   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.304927   86934 kubeadm.go:883] updating cluster {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:02:08.305037   86934 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:02:08.305076   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.334586   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.334605   86934 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:02:08.334646   86934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:02:08.359864   86934 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:02:08.359884   86934 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:02:08.359891   86934 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1009 19:02:08.359982   86934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-608611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
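The unit text above is written verbatim as a systemd drop-in (the 359-byte 10-kubeadm.conf scp'd a few lines below). A sketch of rendering it from the node parameters (kubeletDropIn is an assumed helper name):

    package main

    import "fmt"

    // kubeletDropIn reproduces the kubelet systemd drop-in from the log,
    // parameterized by Kubernetes version, node name and node IP.
    func kubeletDropIn(version, nodeName, nodeIP string) string {
        return fmt.Sprintf(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

    [Install]
    `, version, nodeName, nodeIP)
    }

    func main() {
        fmt.Print(kubeletDropIn("v1.34.1", "ha-608611", "192.168.49.2"))
    }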
	I1009 19:02:08.360041   86934 ssh_runner.go:195] Run: crio config
	I1009 19:02:08.403513   86934 cni.go:84] Creating CNI manager for ""
	I1009 19:02:08.403536   86934 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:02:08.403553   86934 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:02:08.403581   86934 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-608611 NodeName:ha-608611 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:02:08.403758   86934 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-608611"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
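The kubeadm config above is rendered from the options on the kubeadm.go:190 line. A sketch of how the InitConfiguration fragment could be produced with text/template (struct and template here are illustrative, not minikube's actual ones):

    package main

    import (
        "os"
        "text/template"
    )

    type initCfg struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
        CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, initCfg{
            AdvertiseAddress: "192.168.49.2",
            APIServerPort:    8443,
            NodeName:         "ha-608611",
            CRISocket:        "/var/run/crio/crio.sock",
        }); err != nil {
            panic(err)
        }
    }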
	
	I1009 19:02:08.403826   86934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:02:08.411830   86934 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:02:08.411894   86934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:02:08.419468   86934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1009 19:02:08.432379   86934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:02:08.445216   86934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1009 19:02:08.457891   86934 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:02:08.461609   86934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:02:08.471627   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:08.548747   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:08.570439   86934 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611 for IP: 192.168.49.2
	I1009 19:02:08.570462   86934 certs.go:195] generating shared ca certs ...
	I1009 19:02:08.570494   86934 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:08.570644   86934 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 19:02:08.570699   86934 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 19:02:08.570711   86934 certs.go:257] generating profile certs ...
	I1009 19:02:08.570809   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key
	I1009 19:02:08.570886   86934 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key.71ac3d0a
	I1009 19:02:08.570937   86934 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key
	I1009 19:02:08.570950   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:02:08.570974   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:02:08.570990   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:02:08.571008   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:02:08.571026   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:02:08.571045   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:02:08.571062   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:02:08.571080   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:02:08.571169   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 19:02:08.571210   86934 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 19:02:08.571224   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:02:08.571259   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:02:08.571305   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:02:08.571336   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 19:02:08.571392   86934 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:02:08.571429   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.571452   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.571470   86934 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem -> /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.572252   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:02:08.590519   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:02:08.608788   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:02:08.628771   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:02:08.652296   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1009 19:02:08.669442   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:02:08.686413   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:02:08.702970   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:02:08.719872   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 19:02:08.736350   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:02:08.753020   86934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 19:02:08.770756   86934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:02:08.782846   86934 ssh_runner.go:195] Run: openssl version
	I1009 19:02:08.788680   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:02:08.796773   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800287   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.800342   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:02:08.834331   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:02:08.842576   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 19:02:08.850707   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854375   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.854417   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 19:02:08.888132   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 19:02:08.896190   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 19:02:08.904560   86934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908107   86934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.908167   86934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 19:02:08.941616   86934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
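The openssl x509 -hash runs above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks created right after them (b5213941.0, 51391683.0, 3ec20f2e.0). A sketch of the same step (subjectHash is an assumed helper name):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // subjectHash shells out to openssl exactly as the log does and returns
    // the subject hash used to name the /etc/ssl/certs/<hash>.0 symlink.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
    }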
	I1009 19:02:08.949683   86934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:02:08.953888   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:02:08.988843   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:02:09.022384   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:02:09.055785   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:02:09.100654   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:02:09.138816   86934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
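Each openssl x509 -checkend 86400 above asks whether the certificate will still be valid 24 hours from now. The equivalent check in Go, as a sketch (expiresWithin is an assumed helper name):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // mirroring `openssl x509 -checkend` (which exits nonzero in that case).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }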
	I1009 19:02:09.175373   86934 kubeadm.go:400] StartCluster: {Name:ha-608611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-608611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:02:09.175553   86934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:02:09.175626   86934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:02:09.203282   86934 cri.go:89] found id: ""
	I1009 19:02:09.203337   86934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:02:09.211170   86934 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1009 19:02:09.211189   86934 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1009 19:02:09.211233   86934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 19:02:09.218525   86934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:02:09.218879   86934 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-608611" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.218998   86934 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-11374/kubeconfig needs updating (will repair): [kubeconfig missing "ha-608611" cluster setting kubeconfig missing "ha-608611" context setting]
	I1009 19:02:09.219307   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.219795   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.220220   86934 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:02:09.220236   86934 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1009 19:02:09.220244   86934 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1009 19:02:09.220251   86934 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:02:09.220258   86934 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1009 19:02:09.220304   86934 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1009 19:02:09.220587   86934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 19:02:09.228184   86934 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1009 19:02:09.228212   86934 kubeadm.go:601] duration metric: took 17.018594ms to restartPrimaryControlPlane
	I1009 19:02:09.228221   86934 kubeadm.go:402] duration metric: took 52.859442ms to StartCluster
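The recurring "duration metric: took ..." lines follow the usual Go pattern of stamping a start time and logging time.Since on exit; a sketch of the pattern (not minikube's exact code):

    package main

    import (
        "log"
        "time"
    )

    func restartPrimaryControlPlane() {
        start := time.Now()
        defer func() {
            log.Printf("duration metric: took %s to restartPrimaryControlPlane", time.Since(start))
        }()
        // ... work elided ...
        time.Sleep(17 * time.Millisecond)
    }

    func main() { restartPrimaryControlPlane() }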
	I1009 19:02:09.228235   86934 settings.go:142] acquiring lock: {Name:mke1fc24bd3c282bdce5b5999d4611ed242ead0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228289   86934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:02:09.228747   86934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/kubeconfig: {Name:mke7bf8fc0811179129dfd61e3a963860adf8201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:02:09.228944   86934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:02:09.229006   86934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:02:09.229112   86934 addons.go:69] Setting storage-provisioner=true in profile "ha-608611"
	I1009 19:02:09.229129   86934 config.go:182] Loaded profile config "ha-608611": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:02:09.229158   86934 addons.go:69] Setting default-storageclass=true in profile "ha-608611"
	I1009 19:02:09.229194   86934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-608611"
	I1009 19:02:09.229132   86934 addons.go:238] Setting addon storage-provisioner=true in "ha-608611"
	I1009 19:02:09.229294   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.229535   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.229746   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.232398   86934 out.go:179] * Verifying Kubernetes components...
	I1009 19:02:09.234182   86934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:02:09.249828   86934 kapi.go:59] client config for ha-608611: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/profiles/ha-608611/client.key", CAFile:"/home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ce0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:02:09.250212   86934 addons.go:238] Setting addon default-storageclass=true in "ha-608611"
	I1009 19:02:09.250254   86934 host.go:66] Checking if "ha-608611" exists ...
	I1009 19:02:09.250729   86934 cli_runner.go:164] Run: docker container inspect ha-608611 --format={{.State.Status}}
	I1009 19:02:09.253666   86934 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:02:09.255198   86934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.255220   86934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:02:09.255295   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.279913   86934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:02:09.279935   86934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:02:09.279997   86934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-608611
	I1009 19:02:09.280244   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.298795   86934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/ha-608611/id_rsa Username:docker}
	I1009 19:02:09.340817   86934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:02:09.353728   86934 node_ready.go:35] waiting up to 6m0s for node "ha-608611" to be "Ready" ...
	I1009 19:02:09.392883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:02:09.410568   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.451098   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.451133   86934 retry.go:31] will retry after 367.251438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:09.467582   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.467614   86934 retry.go:31] will retry after 202.583149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
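The apply/retry cycles that follow are expected while the apiserver is still coming back: each kubectl apply fails with connection refused and is retried after a growing, jittered delay (0.37s, 0.53s, ... 5.1s below). A sketch of that loop (applyWithRetry is an assumed helper, not minikube's retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry retries a kubectl apply with roughly doubling, jittered
    // delays, matching the back-off progression visible in the log.
    func applyWithRetry(manifest string, attempts int) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
            if err == nil {
                return nil
            }
            jittered := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
        return err
    }

    func main() {
        _ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 8)
    }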
	I1009 19:02:09.671071   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:09.728118   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.728165   86934 retry.go:31] will retry after 532.603205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.819359   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:09.870710   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:09.870743   86934 retry.go:31] will retry after 279.776339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.151303   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.203393   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.203423   86934 retry.go:31] will retry after 347.914412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.261624   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:10.312099   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.312161   86934 retry.go:31] will retry after 754.410355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.551883   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:10.604202   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:10.604236   86934 retry.go:31] will retry after 610.586718ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.067261   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.118580   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.118609   86934 retry.go:31] will retry after 814.916965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.215892   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:11.267928   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.267972   86934 retry.go:31] will retry after 1.45438082s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:11.354562   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:11.934655   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:11.986484   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:11.986513   86934 retry.go:31] will retry after 1.124124769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.723181   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:12.774656   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:12.774689   86934 retry.go:31] will retry after 1.232500279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.111665   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:13.165517   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:13.165552   86934 retry.go:31] will retry after 2.16641371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:13.355245   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:14.007705   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:14.059964   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:14.059992   86934 retry.go:31] will retry after 3.058954256s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.332271   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:15.386449   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:15.386473   86934 retry.go:31] will retry after 3.386344457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:15.854462   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:17.120044   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:17.172191   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:17.172228   86934 retry.go:31] will retry after 5.108857909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:17.855169   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:18.773686   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:18.825043   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:18.825075   86934 retry.go:31] will retry after 4.328736912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:20.354784   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:22.282235   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:22.336593   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:22.336620   86934 retry.go:31] will retry after 8.469274029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:22.355192   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:23.154808   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:23.207154   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:23.207192   86934 retry.go:31] will retry after 9.59352501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:24.854514   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:27.355255   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:29.854449   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:30.806123   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:30.858604   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:30.858637   86934 retry.go:31] will retry after 13.297733582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:32.354331   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:32.800848   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:32.854427   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:32.854451   86934 retry.go:31] will retry after 8.328873063s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:34.354417   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:36.354493   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:38.354571   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:40.354643   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:41.184043   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:41.237661   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:41.237694   86934 retry.go:31] will retry after 10.702907746s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:42.854628   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:44.156959   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:02:44.208755   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:44.208790   86934 retry.go:31] will retry after 18.065677643s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:45.354394   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:47.854450   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:49.854575   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:02:51.941580   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:02:51.995763   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:02:51.995796   86934 retry.go:31] will retry after 22.859549113s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:02:52.354574   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:54.854455   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:57.354280   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:02:59.354606   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:01.854286   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:02.274776   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:02.329455   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:02.329481   86934 retry.go:31] will retry after 18.531804756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:03.854398   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:05.855306   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:08.354544   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:10.354642   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:12.854650   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:14.855487   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:14.910832   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:14.910866   86934 retry.go:31] will retry after 23.992226966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:15.354856   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:17.854777   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	W1009 19:03:19.855067   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:03:20.862242   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:03:20.916094   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1009 19:03:20.916120   86934 retry.go:31] will retry after 48.100773528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:22.355103   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[identical node_ready.go:55 "connection refused" entries repeated every ~2-2.5s through 19:03:38; 6 lines elided]
	I1009 19:03:38.903767   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1009 19:03:38.956838   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:03:38.956956   86934 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
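
At this point the storageclass retry budget is exhausted and the warning flips from "apply failed, will retry" to "Enabling 'default-storageclass' returned an error". Note that the failure is reported but not fatal: start keeps going, which is why the log proceeds to the storage-provisioner callbacks below. A sketch of that flow, an assumed shape rather than minikube's addons.go:

package main

import (
	"errors"
	"fmt"
)

// enableAddon runs each enable callback; a failure is surfaced as a warning
// instead of aborting the start sequence.
func enableAddon(name string, callbacks []func() error) {
	for _, cb := range callbacks {
		if err := cb(); err != nil {
			fmt.Printf("! Enabling '%s' returned an error: running callbacks: [%v]\n", name, err)
			return
		}
	}
	fmt.Printf("addon %s enabled\n", name)
}

func main() {
	apply := func() error { return errors.New("connect: connection refused") }
	enableAddon("default-storageclass", []func() error{apply})
	fmt.Println("* Enabled addons: ") // start continues despite the failure
}
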
	W1009 19:03:40.854352   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[identical node_ready.go:55 "connection refused" entries repeated every ~2-2.5s through 19:04:06; 11 lines elided]
	I1009 19:04:09.017880   86934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1009 19:04:09.070622   86934 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1009 19:04:09.070759   86934 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1009 19:04:09.073837   86934 out.go:179] * Enabled addons: 
	I1009 19:04:09.075208   86934 addons.go:514] duration metric: took 1m59.846203175s for enable addons: enabled=[]
	W1009 19:04:09.354738   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	[identical node_ready.go:55 "connection refused" entries repeated every ~2-2.5s from 19:04:09 through 19:08:08; 105 lines elided]
	W1009 19:08:08.354483   86934 node_ready.go:55] error getting node "ha-608611" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-608611": dial tcp 192.168.49.2:8443: connect: connection refused
	I1009 19:08:09.353866   86934 node_ready.go:38] duration metric: took 6m0.000084484s for node "ha-608611" to be "Ready" ...
	I1009 19:08:09.356453   86934 out.go:203] 
	W1009 19:08:09.357971   86934 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1009 19:08:09.357991   86934 out.go:285] * 
	W1009 19:08:09.359976   86934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:08:09.361285   86934 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.677449723Z" level=info msg="createCtr: removing container 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.677490223Z" level=info msg="createCtr: deleting container 9742a44d54deb222ac36a6152d76a2c04049f27372dfef8c32939ffd0c036394 from storage" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:11 ha-608611 crio[522]: time="2025-10-09T19:08:11.679643261Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-608611_kube-system_8c1c5aee1432fcfd0e6519753fb0d668_0" id=5df9dc29-f464-4ff7-9fc1-8062df79b672 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.654162481Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=96378813-22fb-43ac-bd57-1d04896985a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.655066988Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=87692678-10bd-4e0c-a8d1-16e95cdb6c3c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.656021697Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-608611/kube-scheduler" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.656279613Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.659482213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.659893836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.676193549Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.677801834Z" level=info msg="createCtr: deleting container ID 97744c9a435e1e305f461cf739ebe846fcadc48a6b97c7cb9d66fd2684e35caf from idIndex" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.67783707Z" level=info msg="createCtr: removing container 97744c9a435e1e305f461cf739ebe846fcadc48a6b97c7cb9d66fd2684e35caf" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.677867121Z" level=info msg="createCtr: deleting container 97744c9a435e1e305f461cf739ebe846fcadc48a6b97c7cb9d66fd2684e35caf from storage" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:12 ha-608611 crio[522]: time="2025-10-09T19:08:12.680178315Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-608611_kube-system_aa829d6ea417a48ecaa6f5cad3254d94_0" id=cd1e22c2-c7cb-4193-93ca-13cba4f371b2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.653984915Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=2a21e7e0-8334-46a5-9bc6-6264a4e49df2 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.654933035Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e4e4d478-f884-4c69-9b3c-fc2f4ef3812e name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.655851525Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-608611/kube-controller-manager" id=3c67646a-0bdf-4d1e-9f9e-8f0ab7ed546e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.656152023Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.660518291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.660942729Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.678211319Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=3c67646a-0bdf-4d1e-9f9e-8f0ab7ed546e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.679620001Z" level=info msg="createCtr: deleting container ID d5532d1ff4c39e71cc4220919a2edabd538e85cd409aa62a6e64325a31f675a7 from idIndex" id=3c67646a-0bdf-4d1e-9f9e-8f0ab7ed546e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.679661863Z" level=info msg="createCtr: removing container d5532d1ff4c39e71cc4220919a2edabd538e85cd409aa62a6e64325a31f675a7" id=3c67646a-0bdf-4d1e-9f9e-8f0ab7ed546e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.679692513Z" level=info msg="createCtr: deleting container d5532d1ff4c39e71cc4220919a2edabd538e85cd409aa62a6e64325a31f675a7 from storage" id=3c67646a-0bdf-4d1e-9f9e-8f0ab7ed546e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:08:14 ha-608611 crio[522]: time="2025-10-09T19:08:14.682088911Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-608611_kube-system_cc9d45d79042caf53449ab6317965aad_0" id=3c67646a-0bdf-4d1e-9f9e-8f0ab7ed546e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:08:14.990341    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:14.990956    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:14.992612    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:14.993102    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:08:14.994646    2535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:08:15 up  1:50,  0 user,  load average: 0.08, 0.15, 0.15
	Linux ha-608611 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.463487     671 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.654012     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.679971     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:11 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:11 ha-608611 kubelet[671]:  > podSandboxID="ac516349e9b506d388b69dc76eeb2ab388dd4861bbc6be0177da37d2c5d29a10"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.680115     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:11 ha-608611 kubelet[671]:         container kube-apiserver start failed in pod kube-apiserver-ha-608611_kube-system(8c1c5aee1432fcfd0e6519753fb0d668): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:11 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:11 ha-608611 kubelet[671]: E1009 19:08:11.680178     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-608611" podUID="8c1c5aee1432fcfd0e6519753fb0d668"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.653706     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.680457     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:12 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:12 ha-608611 kubelet[671]:  > podSandboxID="2815d9108b060ab5e9615f041c0109d9325e4b92666a1d711f35f61789cf6add"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.680578     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:12 ha-608611 kubelet[671]:         container kube-scheduler start failed in pod kube-scheduler-ha-608611_kube-system(aa829d6ea417a48ecaa6f5cad3254d94): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:12 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:12 ha-608611 kubelet[671]: E1009 19:08:12.680618     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-608611" podUID="aa829d6ea417a48ecaa6f5cad3254d94"
	Oct 09 19:08:14 ha-608611 kubelet[671]: E1009 19:08:14.653580     671 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-608611\" not found" node="ha-608611"
	Oct 09 19:08:14 ha-608611 kubelet[671]: E1009 19:08:14.682401     671 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:08:14 ha-608611 kubelet[671]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:14 ha-608611 kubelet[671]:  > podSandboxID="e6d213121dff8e12e33b6dfffb3c6dee8f92a52bbf3378d51bab179d2c3d906d"
	Oct 09 19:08:14 ha-608611 kubelet[671]: E1009 19:08:14.682496     671 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:08:14 ha-608611 kubelet[671]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-608611_kube-system(cc9d45d79042caf53449ab6317965aad): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:08:14 ha-608611 kubelet[671]:  > logger="UnhandledError"
	Oct 09 19:08:14 ha-608611 kubelet[671]: E1009 19:08:14.682523     671 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-608611" podUID="cc9d45d79042caf53449ab6317965aad"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-608611 -n ha-608611: exit status 2 (293.32845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-608611" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.59s)
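
Note: the repeated node_ready.go retries above are a deadline-bounded poll of the node's Ready condition. Every attempt fails with "connection refused" because the apiserver container never starts (the CRI-O log shows each create failing with "cannot open sd-bus: No such file or directory"), so the wait gives up after 6m0s with "WaitNodeCondition: context deadline exceeded". The following is a minimal client-go sketch of that kind of poll; it is our illustration, not minikube's actual implementation, and the package name, function name, and 2-second retry interval are assumptions.

// Illustrative sketch only (not minikube source): poll a node's Ready
// condition until it is True or the context deadline expires, mirroring
// the retry/timeout pattern visible in the log above.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the named node until its Ready condition reports True,
// or returns an error once ctx expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(2 * time.Second) // the log shows retries every ~2-2.5s
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// e.g. "dial tcp 192.168.49.2:8443: connect: connection refused"
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node is Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			// corresponds to "WaitNodeCondition: context deadline exceeded"
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

With the control plane down, every Get in a loop like this returns a transport error, so no Ready condition is ever observed and the wait exits only when the 6-minute context expires, matching the GUEST_START failure reported at 19:08:09.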

                                                
                                    
TestJSONOutput/start/Command (497.14s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-073351 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1009 19:10:34.610216   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:15:34.615572   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-073351 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m17.140721198s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b4546466-cc4c-498d-8269-2cb7e476b75f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-073351] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b01361b7-a9a3-4adf-b445-15a2bbe9666b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"73191f70-777b-460c-8be0-e0efb7defb9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9e3bf5c-5c55-49d6-8bbe-b970f559d79f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig"}}
	{"specversion":"1.0","id":"be7a9538-b694-4b24-bd84-61b4a6a7649e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube"}}
	{"specversion":"1.0","id":"78a0261c-f1fe-4101-b87b-ab10d01595b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"85c4ad15-70eb-409a-bf1a-fff648c10811","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ccd2fbc0-f089-4ae5-bd1c-cbb8bec06d23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3883ca2e-b2af-4584-bad7-c1687da27d00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8264efde-ddc7-40b6-b9f0-def7b2dec28a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-073351\" primary control-plane node in \"json-output-073351\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e89383b1-93b1-4664-88be-7bcf970e4ef8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759745255-21703 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4849158-3b07-437c-8e95-0ed901e9238b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"50803e0f-26f0-4e26-8ed4-e0ef6e458088","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"9740b3f4-9f51-47b7-87b1-99dcae128953","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"b87555fe-06d8-4410-8d81-b6e67fcc1fb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"107df4da-a6db-4846-afe7-96ea7659768f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-073351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-073351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001800995s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000105681s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000276895s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00040031s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using y
our preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at http
s://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"6d8a15b7-b6f2-4bf0-8ff0-76119aeeb424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"678c4e5a-5de0-4bf2-bcbd-96367e6c3cf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"dbac4fbe-73ce-4a9e-a9a1-ab5b909123c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 500.939566ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.0010792s\n[control-plane-check] kube-apiserver is not healthy after 4m0.00124731s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001388491s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v paus
e'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp
127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"5a4ba66b-ca13-4bef-865f-0605383aa36f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 500.939566ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.0010792s\n[control-plane-check] kube-apiserver is not healthy after 4m0.00124731s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001388491s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio
.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"htt
ps://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"500220fe-799e-41eb-abcf-5aa0440694b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-073351 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (497.14s)
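
Note: each stdout line in the run above is a JSON CloudEvent envelope (specversion/id/type/data), with progress reported as io.k8s.sigs.minikube.step events whose data carries a numeric "currentstep". The sketch below assumes only encoding/json and the envelope shape visible in the log, and shows the kind of distinctness check that TestJSONOutput/start/parallel/DistinctCurrentSteps performs; the names event and distinctCurrentSteps are ours, not the test's.

// Illustrative sketch only: decode minikube's --output=json stream one
// JSON object per line and flag any "currentstep" value that is reused.
package jsoncheck

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
)

// event mirrors the envelope visible in the log: specversion, id, type, data.
// All data fields shown in the log are strings, so a string map suffices.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

// distinctCurrentSteps scans a stream of newline-delimited events and errors
// if any step number is assigned to more than one emission.
func distinctCurrentSteps(r io.Reader) error {
	seen := map[string]string{} // currentstep -> message
	sc := bufio.NewScanner(r)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be very long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise (e.g. klog lines)
		}
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step := ev.Data["currentstep"]
		if prev, dup := seen[step]; dup {
			return fmt.Errorf("step %s has already been assigned to another step: %s (cannot use for: %s)",
				step, prev, ev.Data["message"])
		}
		seen[step] = ev.Data["message"]
	}
	return sc.Err()
}

In this run such a check fails because kubeadm init is retried after the first four-minute timeout, so steps 12 ("Generating certificates") and 13 ("Booting control plane") are each emitted twice; that duplicate-step complaint is exactly what the next section reports.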

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b4546466-cc4c-498d-8269-2cb7e476b75f
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-073351] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b01361b7-a9a3-4adf-b445-15a2bbe9666b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21139"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 73191f70-777b-460c-8be0-e0efb7defb9e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f9e3bf5c-5c55-49d6-8bbe-b970f559d79f
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: be7a9538-b694-4b24-bd84-61b4a6a7649e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 78a0261c-f1fe-4101-b87b-ab10d01595b3
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 85c4ad15-70eb-409a-bf1a-fff648c10811
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ccd2fbc0-f089-4ae5-bd1c-cbb8bec06d23
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3883ca2e-b2af-4584-bad7-c1687da27d00
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8264efde-ddc7-40b6-b9f0-def7b2dec28a
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-073351\" primary control-plane node in \"json-output-073351\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e89383b1-93b1-4664-88be-7bcf970e4ef8
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: c4849158-3b07-437c-8e95-0ed901e9238b
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 50803e0f-26f0-4e26-8ed4-e0ef6e458088
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9740b3f4-9f51-47b7-87b1-99dcae128953
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b87555fe-06d8-4410-8d81-b6e67fcc1fb4
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 107df4da-a6db-4846-afe7-96ea7659768f
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-073351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-073351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001800995s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000105681s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000276895s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00040031s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257
/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6d8a15b7-b6f2-4bf0-8ff0-76119aeeb424
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 678c4e5a-5de0-4bf2-bcbd-96367e6c3cf1
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dbac4fbe-73ce-4a9e-a9a1-ab5b909123c3
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 500.939566ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.0010792s\n[control-plane-check] kube-apiserver is not healthy after 4m0.00124731s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001388491s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5a4ba66b-ca13-4bef-865f-0605383aa36f
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 500.939566ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.0010792s\n[control-plane-check] kube-apiserver is not healthy after 4m0.00124731s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001388491s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING
SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 500220fe-799e-41eb-abcf-5aa0440694b4
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
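The event dump above shows why this check fails: after the first kubeadm attempt errors out, minikube retries and re-emits step 12 ("Generating certificates and keys ...") and step 13 ("Booting up control plane ..."), so those currentstep values appear twice in the stream. For reference, here is a minimal sketch of the distinctness check, assuming the raw `minikube start --output=json` stream of one CloudEvent JSON object per line (the events above are that same data, pretty-printed by the test harness); this is a hypothetical stand-in, not the actual json_output_test.go implementation:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent captures just the fields the check needs from each
	// minikube JSON event; minikube emits all data values as strings.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		seen := map[string]bool{}
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | <this check>
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // error events can be very long lines
		for sc.Scan() {
			var ev cloudEvent
			if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue // skip malformed lines and non-step events
			}
			if step := ev.Data["currentstep"]; seen[step] {
				fmt.Printf("duplicate currentstep %q\n", step)
			} else {
				seen[step] = true
			}
		}
	}

Run against the stream above, this reports steps "12" and "13" as duplicates, matching the test verdict.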

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b4546466-cc4c-498d-8269-2cb7e476b75f
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-073351] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b01361b7-a9a3-4adf-b445-15a2bbe9666b
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21139"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 73191f70-777b-460c-8be0-e0efb7defb9e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f9e3bf5c-5c55-49d6-8bbe-b970f559d79f
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: be7a9538-b694-4b24-bd84-61b4a6a7649e
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 78a0261c-f1fe-4101-b87b-ab10d01595b3
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 85c4ad15-70eb-409a-bf1a-fff648c10811
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ccd2fbc0-f089-4ae5-bd1c-cbb8bec06d23
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3883ca2e-b2af-4584-bad7-c1687da27d00
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8264efde-ddc7-40b6-b9f0-def7b2dec28a
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-073351\" primary control-plane node in \"json-output-073351\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: e89383b1-93b1-4664-88be-7bcf970e4ef8
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759745255-21703 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: c4849158-3b07-437c-8e95-0ed901e9238b
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 50803e0f-26f0-4e26-8ed4-e0ef6e458088
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 9740b3f4-9f51-47b7-87b1-99dcae128953
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b87555fe-06d8-4410-8d81-b6e67fcc1fb4
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 107df4da-a6db-4846-afe7-96ea7659768f
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-073351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-073351 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001800995s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000105681s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000276895s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.00040031s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cri
o.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257
/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 6d8a15b7-b6f2-4bf0-8ff0-76119aeeb424
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 678c4e5a-5de0-4bf2-bcbd-96367e6c3cf1
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dbac4fbe-73ce-4a9e-a9a1-ab5b909123c3
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 500.939566ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.0010792s\n[control-plane-check] kube-apiserver is not healthy after 4m0.00124731s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001388491s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNI
NG SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 5a4ba66b-ca13-4bef-865f-0605383aa36f
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 500.939566ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.0010792s\n[control-plane-check] kube-apiserver is not healthy after 4m0.00124731s\n[control-plane-check] kube-scheduler is not healthy after 4m0.001388491s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING
SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 500220fe-799e-41eb-abcf-5aa0440694b4
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
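This sibling check fails on the same retry: the sequence climbs 0, 1, 3, 5, 8, 11, 12, 13, then drops back to 12 when kubeadm init is re-run. A sketch of the monotonicity check, under the same one-CloudEvent-per-line assumption as the sketch above (again hypothetical, not the real test code):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	func main() {
		prev := -1
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024)
		for sc.Scan() {
			var ev struct {
				Type string            `json:"type"`
				Data map[string]string `json:"data"`
			}
			if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			cur, err := strconv.Atoi(ev.Data["currentstep"])
			if err != nil {
				continue
			}
			// currentstep must be strictly increasing across the stream.
			if cur <= prev {
				fmt.Printf("current step is not in increasing order: %d after %d\n", cur, prev)
			}
			prev = cur
		}
	}

Any retry that replays an earlier step, as seen here, trips this check even when each individual event is well-formed.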

                                                
                                    
TestMinikubeProfile (500.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-644179 --driver=docker  --container-runtime=crio
E1009 19:20:34.618322   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:25:34.618111   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-644179 --driver=docker  --container-runtime=crio: exit status 80 (8m17.116331681s)

                                                
                                                
-- stdout --
	* [first-644179] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-644179" primary control-plane node in "first-644179" cluster
	* Pulling base image v0.0.48-1759745255-21703 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-644179 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-644179 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.030792ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000233414s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000544056s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446057s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.80567ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000294866s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000692889s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623533s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.80567ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000294866s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000692889s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623533s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	Here is one example of how you can list all running Kubernetes containers using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
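	As a concrete sketch of that workflow (the socket path comes from the message above; CONTAINERID is a placeholder to fill in from the first command's output, and the --state/--tail filters are optional conveniences):
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a --state exited | grep kube'
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs --tail 50 CONTAINERID'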
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
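	The connection-refused failures can also be confirmed by hand against the same three endpoints kubeadm probed, run from inside the node (e.g. via 'minikube ssh'); a sketch, where -k skips verification of the bootstrap certificates and "connection refused" is expected while the static pods are down:
		- 'curl -k https://192.168.58.2:8443/livez'
		- 'curl -k https://127.0.0.1:10259/livez'
		- 'curl -k https://127.0.0.1:10257/healthz'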
	
	* 

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-644179 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-09 19:27:12.187248072 +0000 UTC m=+5462.104075176
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-646950
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-646950: exit status 1 (28.79482ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-646950

** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-646950 -n second-646950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-646950 -n second-646950: exit status 85 (53.073485ms)

-- stdout --
	* Profile "second-646950" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-646950"

-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-646950" host is not running, skipping log retrieval (state="* Profile \"second-646950\" not found. Run \"minikube profile list\" to view all profiles.")
helpers_test.go:175: Cleaning up "second-646950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-646950
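The cleanup step above is the usual teardown for a stale profile; done by hand it would look roughly like this (a sketch using the same binary the test drives):

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 delete -p second-646950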
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-09 19:27:12.415089365 +0000 UTC m=+5462.331916475
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-644179
helpers_test.go:243: (dbg) docker inspect first-644179:

-- stdout --
	[
	    {
	        "Id": "98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be",
	        "Created": "2025-10-09T19:19:00.205196171Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 120574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-09T19:19:00.242195967Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6fde2176fdc734a0b1cf5396bccb3dc7d4299b26808035c9aa3b16b26946dbd",
	        "ResolvConfPath": "/var/lib/docker/containers/98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be/hostname",
	        "HostsPath": "/var/lib/docker/containers/98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be/hosts",
	        "LogPath": "/var/lib/docker/containers/98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be/98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be-json.log",
	        "Name": "/first-644179",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "first-644179:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-644179",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "98419857f41666d9c20917b3fb1d9103dac8f3e4e92791b1c1daf366600861be",
	                "LowerDir": "/var/lib/docker/overlay2/b417aa3ce8ffe882731712c45f55e4602b4b06f7e1ed9ecf7675afffed64ab97-init/diff:/var/lib/docker/overlay2/dc8070f6271392f650f08ccfa9baf079a520b5f581f039e0219299389d88e1d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b417aa3ce8ffe882731712c45f55e4602b4b06f7e1ed9ecf7675afffed64ab97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b417aa3ce8ffe882731712c45f55e4602b4b06f7e1ed9ecf7675afffed64ab97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b417aa3ce8ffe882731712c45f55e4602b4b06f7e1ed9ecf7675afffed64ab97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-644179",
	                "Source": "/var/lib/docker/volumes/first-644179/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-644179",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-644179",
	                "name.minikube.sigs.k8s.io": "first-644179",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb8987f231edeb4d2dd6334d78b094be78eb75cbf8123e497233eb81868989aa",
	            "SandboxKey": "/var/run/docker/netns/fb8987f231ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-644179": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:3b:83:fc:4a:92",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7c31e9c8fe44516f652354096f2f2146ceb02c0cc2ff92b09884fb4ed2aefcc0",
	                    "EndpointID": "d523b9d8ed9ffcda5808a14044b4709468398b314e831fdd6b0c63f4dfd54b03",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-644179",
	                        "98419857f416"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
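Rather than scanning the full JSON above, individual fields can be pulled out with Go templates; a short sketch reusing templates that minikube itself runs later in this log:

	# Container state, and the static IP on the first-644179 network.
	docker inspect -f '{{.State.Status}}' first-644179
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' first-644179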
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-644179 -n first-644179
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-644179 -n first-644179: exit status 6 (291.098756ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 19:27:12.710986  125072 status.go:458] kubeconfig endpoint: get endpoint: "first-644179" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
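The warning in the status output names the fix; repairing and verifying the kubectl context would look roughly like this (a sketch; -p selects the profile):

	out/minikube-linux-amd64 update-context -p first-644179
	kubectl config get-contexts first-644179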
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-644179 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-608611 node delete m03 --alsologtostderr -v 5                                                                        │ ha-608611                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:01 UTC │                     │
	│ stop    │ ha-608611 stop --alsologtostderr -v 5                                                                                   │ ha-608611                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:02 UTC │ 09 Oct 25 19:02 UTC │
	│ start   │ ha-608611 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-608611                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:02 UTC │                     │
	│ node    │ ha-608611 node add --control-plane --alsologtostderr -v 5                                                               │ ha-608611                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:08 UTC │                     │
	│ delete  │ -p ha-608611                                                                                                            │ ha-608611                │ jenkins  │ v1.37.0 │ 09 Oct 25 19:08 UTC │ 09 Oct 25 19:08 UTC │
	│ start   │ -p json-output-073351 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-073351       │ testUser │ v1.37.0 │ 09 Oct 25 19:08 UTC │                     │
	│ pause   │ -p json-output-073351 --output=json --user=testUser                                                                     │ json-output-073351       │ testUser │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ unpause │ -p json-output-073351 --output=json --user=testUser                                                                     │ json-output-073351       │ testUser │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ stop    │ -p json-output-073351 --output=json --user=testUser                                                                     │ json-output-073351       │ testUser │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ delete  │ -p json-output-073351                                                                                                   │ json-output-073351       │ jenkins  │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ start   │ -p json-output-error-732537 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-732537 │ jenkins  │ v1.37.0 │ 09 Oct 25 19:16 UTC │                     │
	│ delete  │ -p json-output-error-732537                                                                                             │ json-output-error-732537 │ jenkins  │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:16 UTC │
	│ start   │ -p docker-network-027621 --network=                                                                                     │ docker-network-027621    │ jenkins  │ v1.37.0 │ 09 Oct 25 19:16 UTC │ 09 Oct 25 19:17 UTC │
	│ delete  │ -p docker-network-027621                                                                                                │ docker-network-027621    │ jenkins  │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ start   │ -p docker-network-561631 --network=bridge                                                                               │ docker-network-561631    │ jenkins  │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ delete  │ -p docker-network-561631                                                                                                │ docker-network-561631    │ jenkins  │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ start   │ -p existing-network-230837 --network=existing-network                                                                   │ existing-network-230837  │ jenkins  │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:17 UTC │
	│ delete  │ -p existing-network-230837                                                                                              │ existing-network-230837  │ jenkins  │ v1.37.0 │ 09 Oct 25 19:17 UTC │ 09 Oct 25 19:18 UTC │
	│ start   │ -p custom-subnet-323149 --subnet=192.168.60.0/24                                                                        │ custom-subnet-323149     │ jenkins  │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ delete  │ -p custom-subnet-323149                                                                                                 │ custom-subnet-323149     │ jenkins  │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ start   │ -p static-ip-090536 --static-ip=192.168.200.200                                                                         │ static-ip-090536         │ jenkins  │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ ip      │ static-ip-090536 ip                                                                                                     │ static-ip-090536         │ jenkins  │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ delete  │ -p static-ip-090536                                                                                                     │ static-ip-090536         │ jenkins  │ v1.37.0 │ 09 Oct 25 19:18 UTC │ 09 Oct 25 19:18 UTC │
	│ start   │ -p first-644179 --driver=docker  --container-runtime=crio                                                               │ first-644179             │ jenkins  │ v1.37.0 │ 09 Oct 25 19:18 UTC │                     │
	│ delete  │ -p second-646950                                                                                                        │ second-646950            │ jenkins  │ v1.37.0 │ 09 Oct 25 19:27 UTC │ 09 Oct 25 19:27 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 19:18:55
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:18:55.110937  120005 out.go:360] Setting OutFile to fd 1 ...
	I1009 19:18:55.111180  120005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:18:55.111183  120005 out.go:374] Setting ErrFile to fd 2...
	I1009 19:18:55.111187  120005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 19:18:55.111395  120005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 19:18:55.111864  120005 out.go:368] Setting JSON to false
	I1009 19:18:55.112728  120005 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7283,"bootTime":1760030252,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:18:55.112812  120005 start.go:141] virtualization: kvm guest
	I1009 19:18:55.115011  120005 out.go:179] * [first-644179] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 19:18:55.116497  120005 notify.go:220] Checking for updates...
	I1009 19:18:55.116537  120005 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 19:18:55.117803  120005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:18:55.119256  120005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 19:18:55.120544  120005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 19:18:55.121622  120005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:18:55.122875  120005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:18:55.124208  120005 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 19:18:55.148085  120005 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 19:18:55.148232  120005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:18:55.206824  120005 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:18:55.19628286 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:18:55.206921  120005 docker.go:318] overlay module found
	I1009 19:18:55.208759  120005 out.go:179] * Using the docker driver based on user configuration
	I1009 19:18:55.210039  120005 start.go:305] selected driver: docker
	I1009 19:18:55.210047  120005 start.go:925] validating driver "docker" against <nil>
	I1009 19:18:55.210058  120005 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:18:55.210179  120005 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:18:55.270512  120005 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-09 19:18:55.260666467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:18:55.270643  120005 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 19:18:55.271125  120005 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 19:18:55.271297  120005 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 19:18:55.272978  120005 out.go:179] * Using Docker driver with root privileges
	I1009 19:18:55.274220  120005 cni.go:84] Creating CNI manager for ""
	I1009 19:18:55.274278  120005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:18:55.274287  120005 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:18:55.274346  120005 start.go:349] cluster config:
	{Name:first-644179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-644179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:18:55.275696  120005 out.go:179] * Starting "first-644179" primary control-plane node in "first-644179" cluster
	I1009 19:18:55.276861  120005 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 19:18:55.277920  120005 out.go:179] * Pulling base image v0.0.48-1759745255-21703 ...
	I1009 19:18:55.278880  120005 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:18:55.278903  120005 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:18:55.278909  120005 cache.go:64] Caching tarball of preloaded images
	I1009 19:18:55.278980  120005 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 19:18:55.279024  120005 preload.go:238] Found /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:18:55.279031  120005 cache.go:67] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1009 19:18:55.279357  120005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/config.json ...
	I1009 19:18:55.279374  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/config.json: {Name:mk392ffb5ff260c361bc1db89582c1de6fe409ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:18:55.299356  120005 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon, skipping pull
	I1009 19:18:55.299366  120005 cache.go:157] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 exists in daemon, skipping load
	I1009 19:18:55.299382  120005 cache.go:242] Successfully downloaded all kic artifacts
	I1009 19:18:55.299407  120005 start.go:360] acquireMachinesLock for first-644179: {Name:mkee40e26d082e983d0b20abef868839fb3ed90f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:18:55.299491  120005 start.go:364] duration metric: took 73.121µs to acquireMachinesLock for "first-644179"
	I1009 19:18:55.299508  120005 start.go:93] Provisioning new machine with config: &{Name:first-644179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-644179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:18:55.299562  120005 start.go:125] createHost starting for "" (driver="docker")
	I1009 19:18:55.301435  120005 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1009 19:18:55.301627  120005 start.go:159] libmachine.API.Create for "first-644179" (driver="docker")
	I1009 19:18:55.301650  120005 client.go:168] LocalClient.Create starting
	I1009 19:18:55.301727  120005 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem
	I1009 19:18:55.301752  120005 main.go:141] libmachine: Decoding PEM data...
	I1009 19:18:55.301762  120005 main.go:141] libmachine: Parsing certificate...
	I1009 19:18:55.301807  120005 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem
	I1009 19:18:55.301826  120005 main.go:141] libmachine: Decoding PEM data...
	I1009 19:18:55.301833  120005 main.go:141] libmachine: Parsing certificate...
	I1009 19:18:55.302188  120005 cli_runner.go:164] Run: docker network inspect first-644179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 19:18:55.319175  120005 cli_runner.go:211] docker network inspect first-644179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 19:18:55.319225  120005 network_create.go:284] running [docker network inspect first-644179] to gather additional debugging logs...
	I1009 19:18:55.319238  120005 cli_runner.go:164] Run: docker network inspect first-644179
	W1009 19:18:55.335792  120005 cli_runner.go:211] docker network inspect first-644179 returned with exit code 1
	I1009 19:18:55.335810  120005 network_create.go:287] error running [docker network inspect first-644179]: docker network inspect first-644179: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-644179 not found
	I1009 19:18:55.335820  120005 network_create.go:289] output of [docker network inspect first-644179]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-644179 not found
	
	** /stderr **
	I1009 19:18:55.335898  120005 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:18:55.352894  120005 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4fe555d5402b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:0d:2e:a6:f1:88} reservation:<nil>}
	I1009 19:18:55.353410  120005 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00162cc10}
	I1009 19:18:55.353439  120005 network_create.go:124] attempt to create docker network first-644179 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1009 19:18:55.353509  120005 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-644179 first-644179
	I1009 19:18:55.410212  120005 network_create.go:108] docker network first-644179 192.168.58.0/24 created
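	The subnet and gateway just created can be double-checked with the same kind of Go template minikube runs above (a sketch):
	  docker network inspect first-644179 -f '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'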
	I1009 19:18:55.410237  120005 kic.go:121] calculated static IP "192.168.58.2" for the "first-644179" container
	I1009 19:18:55.410312  120005 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 19:18:55.427818  120005 cli_runner.go:164] Run: docker volume create first-644179 --label name.minikube.sigs.k8s.io=first-644179 --label created_by.minikube.sigs.k8s.io=true
	I1009 19:18:55.446370  120005 oci.go:103] Successfully created a docker volume first-644179
	I1009 19:18:55.446440  120005 cli_runner.go:164] Run: docker run --rm --name first-644179-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-644179 --entrypoint /usr/bin/test -v first-644179:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -d /var/lib
	I1009 19:18:55.828732  120005 oci.go:107] Successfully prepared a docker volume first-644179
	I1009 19:18:55.828793  120005 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:18:55.828803  120005 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 19:18:55.828858  120005 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-644179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 19:19:00.132841  120005 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-644179:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 -I lz4 -xf /preloaded.tar -C /extractDir: (4.303936649s)
	I1009 19:19:00.132862  120005 kic.go:203] duration metric: took 4.304057955s to extract preloaded images to volume ...
	W1009 19:19:00.132967  120005 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1009 19:19:00.132993  120005 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1009 19:19:00.133025  120005 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 19:19:00.188916  120005 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-644179 --name first-644179 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-644179 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-644179 --network first-644179 --ip 192.168.58.2 --volume first-644179:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92
	I1009 19:19:00.458943  120005 cli_runner.go:164] Run: docker container inspect first-644179 --format={{.State.Running}}
	I1009 19:19:00.477220  120005 cli_runner.go:164] Run: docker container inspect first-644179 --format={{.State.Status}}
	I1009 19:19:00.496103  120005 cli_runner.go:164] Run: docker exec first-644179 stat /var/lib/dpkg/alternatives/iptables
	I1009 19:19:00.545237  120005 oci.go:144] the created container "first-644179" has a running status.
	I1009 19:19:00.545264  120005 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa...
	I1009 19:19:00.854885  120005 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 19:19:00.880719  120005 cli_runner.go:164] Run: docker container inspect first-644179 --format={{.State.Status}}
	I1009 19:19:00.899605  120005 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 19:19:00.899616  120005 kic_runner.go:114] Args: [docker exec --privileged first-644179 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 19:19:00.943312  120005 cli_runner.go:164] Run: docker container inspect first-644179 --format={{.State.Status}}
	I1009 19:19:00.961634  120005 machine.go:93] provisionDockerMachine start ...
	I1009 19:19:00.961696  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:00.981584  120005 main.go:141] libmachine: Using SSH client type: native
	I1009 19:19:00.981806  120005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 19:19:00.981813  120005 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:19:01.128414  120005 main.go:141] libmachine: SSH cmd err, output: <nil>: first-644179
	
	I1009 19:19:01.128431  120005 ubuntu.go:182] provisioning hostname "first-644179"
	I1009 19:19:01.128492  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:01.147688  120005 main.go:141] libmachine: Using SSH client type: native
	I1009 19:19:01.147880  120005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 19:19:01.147888  120005 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-644179 && echo "first-644179" | sudo tee /etc/hostname
	I1009 19:19:01.304496  120005 main.go:141] libmachine: SSH cmd err, output: <nil>: first-644179
	
	I1009 19:19:01.304557  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:01.322887  120005 main.go:141] libmachine: Using SSH client type: native
	I1009 19:19:01.323102  120005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 19:19:01.323114  120005 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-644179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-644179/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-644179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:19:01.468796  120005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:19:01.468818  120005 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-11374/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-11374/.minikube}
	I1009 19:19:01.468860  120005 ubuntu.go:190] setting up certificates
	I1009 19:19:01.468871  120005 provision.go:84] configureAuth start
	I1009 19:19:01.468964  120005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-644179
	I1009 19:19:01.486581  120005 provision.go:143] copyHostCerts
	I1009 19:19:01.486639  120005 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem, removing ...
	I1009 19:19:01.486648  120005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem
	I1009 19:19:01.486714  120005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/ca.pem (1082 bytes)
	I1009 19:19:01.486807  120005 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem, removing ...
	I1009 19:19:01.486810  120005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem
	I1009 19:19:01.486835  120005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/cert.pem (1123 bytes)
	I1009 19:19:01.486886  120005 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem, removing ...
	I1009 19:19:01.486889  120005 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem
	I1009 19:19:01.486909  120005 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-11374/.minikube/key.pem (1679 bytes)
	I1009 19:19:01.486955  120005 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem org=jenkins.first-644179 san=[127.0.0.1 192.168.58.2 first-644179 localhost minikube]
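	The SANs listed above should end up in the generated server certificate; one way to verify, using the ServerCertPath from the auth options (a sketch):
	  openssl x509 -in /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'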
	I1009 19:19:01.828650  120005 provision.go:177] copyRemoteCerts
	I1009 19:19:01.828696  120005 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:19:01.828764  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:01.846647  120005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa Username:docker}
	I1009 19:19:01.949450  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:19:01.968898  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1009 19:19:01.986543  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:19:02.004820  120005 provision.go:87] duration metric: took 535.931332ms to configureAuth
	I1009 19:19:02.004837  120005 ubuntu.go:206] setting minikube options for container-runtime
	I1009 19:19:02.005031  120005 config.go:182] Loaded profile config "first-644179": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 19:19:02.005131  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:02.023400  120005 main.go:141] libmachine: Using SSH client type: native
	I1009 19:19:02.023610  120005 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1009 19:19:02.023628  120005 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:19:02.280779  120005 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
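	Whether the option file landed and CRI-O survived the restart can be checked from inside the node (a sketch; the file path is the one written above):
	  cat /etc/sysconfig/crio.minikube
	  systemctl is-active crio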
	I1009 19:19:02.280793  120005 machine.go:96] duration metric: took 1.31914785s to provisionDockerMachine
	I1009 19:19:02.280801  120005 client.go:171] duration metric: took 6.97914786s to LocalClient.Create
	I1009 19:19:02.280817  120005 start.go:167] duration metric: took 6.979191809s to libmachine.API.Create "first-644179"
	I1009 19:19:02.280823  120005 start.go:293] postStartSetup for "first-644179" (driver="docker")
	I1009 19:19:02.280830  120005 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:19:02.280871  120005 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:19:02.280899  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:02.298358  120005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa Username:docker}
	I1009 19:19:02.402049  120005 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:19:02.405485  120005 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 19:19:02.405502  120005 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1009 19:19:02.405511  120005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/addons for local assets ...
	I1009 19:19:02.405555  120005 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-11374/.minikube/files for local assets ...
	I1009 19:19:02.405638  120005 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem -> 148802.pem in /etc/ssl/certs
	I1009 19:19:02.405721  120005 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:19:02.413116  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:19:02.432607  120005 start.go:296] duration metric: took 151.771887ms for postStartSetup
	I1009 19:19:02.432940  120005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-644179
	I1009 19:19:02.450040  120005 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/config.json ...
	I1009 19:19:02.450316  120005 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:19:02.450362  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:02.467335  120005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa Username:docker}
	I1009 19:19:02.566299  120005 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 19:19:02.570944  120005 start.go:128] duration metric: took 7.271367887s to createHost
	I1009 19:19:02.570962  120005 start.go:83] releasing machines lock for "first-644179", held for 7.271463397s
	I1009 19:19:02.571026  120005 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-644179
	I1009 19:19:02.588883  120005 ssh_runner.go:195] Run: cat /version.json
	I1009 19:19:02.588916  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:02.588926  120005 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:19:02.588979  120005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-644179
	I1009 19:19:02.608381  120005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa Username:docker}
	I1009 19:19:02.608637  120005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/first-644179/id_rsa Username:docker}
	I1009 19:19:02.759871  120005 ssh_runner.go:195] Run: systemctl --version
	I1009 19:19:02.766229  120005 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:19:02.800821  120005 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:19:02.805780  120005 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:19:02.805832  120005 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:19:02.832869  120005 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
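	The find/mv step above renames the bundled bridge and podman CNI configs with a .mk_disabled suffix so they cannot race the CNI minikube installs later (the log below recommends kindnet for the docker driver + crio runtime). A quick way to see the effect, assuming the node is reachable:

	    minikube ssh -p first-644179 -- ls /etc/cni/net.d
	    # expect the two configs listed above, each with a .mk_disabled suffix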
	I1009 19:19:02.832884  120005 start.go:495] detecting cgroup driver to use...
	I1009 19:19:02.832914  120005 detect.go:190] detected "systemd" cgroup driver on host os
	I1009 19:19:02.832961  120005 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:19:02.848594  120005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:19:02.861185  120005 docker.go:218] disabling cri-docker service (if available) ...
	I1009 19:19:02.861222  120005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:19:02.877947  120005 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:19:02.895196  120005 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:19:02.973318  120005 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:19:03.059289  120005 docker.go:234] disabling docker service ...
	I1009 19:19:03.059335  120005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:19:03.078566  120005 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:19:03.091270  120005 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:19:03.172993  120005 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:19:03.252957  120005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:19:03.265721  120005 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:19:03.279813  120005 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1009 19:19:03.279872  120005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.290163  120005 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1009 19:19:03.290222  120005 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.299334  120005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.307997  120005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.316792  120005 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:19:03.325042  120005 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.333518  120005 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.347084  120005 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:19:03.355822  120005 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:19:03.363265  120005 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:19:03.370703  120005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:19:03.448085  120005 ssh_runner.go:195] Run: sudo systemctl restart crio
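	The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to systemd to match the driver detected on the host, conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A sketch for confirming the result (the grep invocation is illustrative, not output captured in this run):

	    minikube ssh -p first-644179 -- sudo grep -e pause_image -e cgroup_manager -e conmon_cgroup -e ip_unprivileged_port_start /etc/crio/crio.conf.d/02-crio.conf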
	I1009 19:19:03.553374  120005 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:19:03.553426  120005 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:19:03.557299  120005 start.go:563] Will wait 60s for crictl version
	I1009 19:19:03.557341  120005 ssh_runner.go:195] Run: which crictl
	I1009 19:19:03.561195  120005 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1009 19:19:03.585228  120005 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1009 19:19:03.585289  120005 ssh_runner.go:195] Run: crio --version
	I1009 19:19:03.612519  120005 ssh_runner.go:195] Run: crio --version
	I1009 19:19:03.640530  120005 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1009 19:19:03.641986  120005 cli_runner.go:164] Run: docker network inspect first-644179 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 19:19:03.659360  120005 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1009 19:19:03.663490  120005 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:19:03.673639  120005 kubeadm.go:883] updating cluster {Name:first-644179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-644179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:19:03.673769  120005 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1009 19:19:03.673827  120005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:19:03.706370  120005 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:19:03.706382  120005 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:19:03.706424  120005 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:19:03.730156  120005 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:19:03.730169  120005 cache_images.go:85] Images are preloaded, skipping loading
	I1009 19:19:03.730177  120005 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1009 19:19:03.730260  120005 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-644179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-644179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
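	In the kubelet unit rendered above, the empty ExecStart= line is deliberate systemd drop-in syntax: a non-oneshot service rejects a second ExecStart= unless the command list is cleared first, so the blank assignment resets the packaged command before the minikube-specific one is installed. The merged result can be reviewed on the node with:

	    minikube ssh -p first-644179 -- systemctl cat kubelet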
	I1009 19:19:03.730346  120005 ssh_runner.go:195] Run: crio config
	I1009 19:19:03.775871  120005 cni.go:84] Creating CNI manager for ""
	I1009 19:19:03.775885  120005 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 19:19:03.775899  120005 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1009 19:19:03.775916  120005 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-644179 NodeName:first-644179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:19:03.776012  120005 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-644179"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
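	The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2208 bytes, per the scp line below) and only promoted to kubeadm.yaml right before init runs. To compare what kubeadm actually consumed against this dump, something like the following would work (hypothetical command, assuming the node container survived the failed start):

	    minikube ssh -p first-644179 -- sudo cat /var/tmp/minikube/kubeadm.yaml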
	I1009 19:19:03.776070  120005 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1009 19:19:03.784103  120005 binaries.go:51] Found k8s binaries, skipping transfer
	I1009 19:19:03.784177  120005 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:19:03.791679  120005 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1009 19:19:03.804697  120005 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:19:03.819947  120005 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1009 19:19:03.832331  120005 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1009 19:19:03.835793  120005 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:19:03.845554  120005 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:19:03.926040  120005 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:19:03.948424  120005 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179 for IP: 192.168.58.2
	I1009 19:19:03.948439  120005 certs.go:195] generating shared ca certs ...
	I1009 19:19:03.948457  120005 certs.go:227] acquiring lock for ca certs: {Name:mk6c68a302e39e2a3282a46221ba5eac6f521c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:19:03.948580  120005 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key
	I1009 19:19:03.948610  120005 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key
	I1009 19:19:03.948616  120005 certs.go:257] generating profile certs ...
	I1009 19:19:03.948662  120005 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/client.key
	I1009 19:19:03.948677  120005 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/client.crt with IP's: []
	I1009 19:19:04.073863  120005 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/client.crt ...
	I1009 19:19:04.073879  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/client.crt: {Name:mk5a58e28954682632db2a943450610d1fdcc828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:19:04.074078  120005 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/client.key ...
	I1009 19:19:04.074084  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/client.key: {Name:mk37e37d994e24e521ca77f556d55445d1a78468 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:19:04.074182  120005 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.key.919e1f7f
	I1009 19:19:04.074192  120005 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt.919e1f7f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1009 19:19:04.179716  120005 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt.919e1f7f ...
	I1009 19:19:04.179731  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt.919e1f7f: {Name:mk5844c558ad5eb45d4ccaf1ac7f93bc74337e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:19:04.179890  120005 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.key.919e1f7f ...
	I1009 19:19:04.179898  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.key.919e1f7f: {Name:mkb89d97761bed61ef410413d61e5ddd9b6e0f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:19:04.179966  120005 certs.go:382] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt.919e1f7f -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt
	I1009 19:19:04.180053  120005 certs.go:386] copying /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.key.919e1f7f -> /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.key
	I1009 19:19:04.180101  120005 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.key
	I1009 19:19:04.180114  120005 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.crt with IP's: []
	I1009 19:19:04.606522  120005 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.crt ...
	I1009 19:19:04.606536  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.crt: {Name:mk5c9cf3dbb14bd3a87539091812ba567b6123e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:19:04.606702  120005 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.key ...
	I1009 19:19:04.606708  120005 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.key: {Name:mk45d25bbc3e314e2d6de66736b846a72375f19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
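	The apiserver serving certificate generated above is signed for 10.96.0.1 (the in-cluster kubernetes service VIP), 127.0.0.1, 10.0.0.1, and the node IP 192.168.58.2. Its SANs can be double-checked on the host with plain openssl (paths taken from this log):

	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'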
	I1009 19:19:04.606883  120005 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem (1338 bytes)
	W1009 19:19:04.606912  120005 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880_empty.pem, impossibly tiny 0 bytes
	I1009 19:19:04.606917  120005 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 19:19:04.606941  120005 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:19:04.606958  120005 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:19:04.606975  120005 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/certs/key.pem (1679 bytes)
	I1009 19:19:04.607017  120005 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem (1708 bytes)
	I1009 19:19:04.607838  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:19:04.626848  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 19:19:04.645820  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:19:04.663630  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1009 19:19:04.681520  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 19:19:04.698573  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:19:04.715129  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:19:04.731675  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/first-644179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:19:04.748289  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:19:04.767686  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/certs/14880.pem --> /usr/share/ca-certificates/14880.pem (1338 bytes)
	I1009 19:19:04.785199  120005 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/ssl/certs/148802.pem --> /usr/share/ca-certificates/148802.pem (1708 bytes)
	I1009 19:19:04.802785  120005 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:19:04.815572  120005 ssh_runner.go:195] Run: openssl version
	I1009 19:19:04.821857  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:19:04.830640  120005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:19:04.834431  120005 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 17:56 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:19:04.834472  120005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:19:04.868538  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:19:04.877546  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14880.pem && ln -fs /usr/share/ca-certificates/14880.pem /etc/ssl/certs/14880.pem"
	I1009 19:19:04.886472  120005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14880.pem
	I1009 19:19:04.890436  120005 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 18:13 /usr/share/ca-certificates/14880.pem
	I1009 19:19:04.890487  120005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14880.pem
	I1009 19:19:04.924718  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14880.pem /etc/ssl/certs/51391683.0"
	I1009 19:19:04.933667  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148802.pem && ln -fs /usr/share/ca-certificates/148802.pem /etc/ssl/certs/148802.pem"
	I1009 19:19:04.942029  120005 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148802.pem
	I1009 19:19:04.945744  120005 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 18:13 /usr/share/ca-certificates/148802.pem
	I1009 19:19:04.945799  120005 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148802.pem
	I1009 19:19:04.980233  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148802.pem /etc/ssl/certs/3ec20f2e.0"
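	The openssl x509 -hash calls above explain the symlink names used in this section: OpenSSL locates CAs in /etc/ssl/certs by subject-name hash, so each PEM gets a <hash>.0 symlink (b5213941.0 for minikubeCA.pem, 51391683.0 for 14880.pem, 3ec20f2e.0 for 148802.pem). One hash can be reproduced by hand on the node:

	    # prints b5213941 for the minikube CA (value matching the symlink above)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem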
	I1009 19:19:04.989174  120005 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:19:04.992954  120005 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:19:04.993008  120005 kubeadm.go:400] StartCluster: {Name:first-644179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-644179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:19:04.993095  120005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:19:04.993168  120005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:19:05.022741  120005 cri.go:89] found id: ""
	I1009 19:19:05.022796  120005 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:19:05.030949  120005 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:19:05.038986  120005 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:19:05.039040  120005 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:19:05.047000  120005 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:19:05.047011  120005 kubeadm.go:157] found existing configuration files:
	
	I1009 19:19:05.047052  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:19:05.054987  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:19:05.055030  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:19:05.062781  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:19:05.070757  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:19:05.070811  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:19:05.078275  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:19:05.085814  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:19:05.085858  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:19:05.093422  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:19:05.101027  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:19:05.101066  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:19:05.108345  120005 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:19:05.168230  120005 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:19:05.227379  120005 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:23:08.935066  120005 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:23:08.935272  120005 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:23:08.938216  120005 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:23:08.938283  120005 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:23:08.938422  120005 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:23:08.938502  120005 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:23:08.938546  120005 kubeadm.go:318] OS: Linux
	I1009 19:23:08.938611  120005 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:23:08.938692  120005 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:23:08.938763  120005 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:23:08.938881  120005 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:23:08.938945  120005 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:23:08.939017  120005 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:23:08.939082  120005 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:23:08.939155  120005 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:23:08.939261  120005 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:23:08.939410  120005 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:23:08.939556  120005 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:23:08.939616  120005 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:23:08.941877  120005 out.go:252]   - Generating certificates and keys ...
	I1009 19:23:08.941936  120005 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:23:08.942007  120005 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:23:08.942075  120005 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:23:08.942126  120005 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:23:08.942211  120005 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:23:08.942253  120005 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1009 19:23:08.942309  120005 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1009 19:23:08.942411  120005 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-644179 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 19:23:08.942464  120005 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1009 19:23:08.942570  120005 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-644179 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 19:23:08.942653  120005 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:23:08.942723  120005 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:23:08.942780  120005 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1009 19:23:08.942863  120005 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:23:08.942939  120005 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:23:08.943006  120005 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:23:08.943059  120005 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:23:08.943114  120005 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:23:08.943182  120005 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:23:08.943262  120005 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:23:08.943313  120005 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:23:08.944931  120005 out.go:252]   - Booting up control plane ...
	I1009 19:23:08.945010  120005 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:23:08.945088  120005 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:23:08.945165  120005 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:23:08.945281  120005 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:23:08.945378  120005 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:23:08.945456  120005 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:23:08.945544  120005 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:23:08.945599  120005 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:23:08.945717  120005 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:23:08.945853  120005 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:23:08.945897  120005 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.030792ms
	I1009 19:23:08.945967  120005 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:23:08.946032  120005 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1009 19:23:08.946116  120005 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:23:08.946200  120005 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:23:08.946255  120005 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000233414s
	I1009 19:23:08.946313  120005 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000544056s
	I1009 19:23:08.946374  120005 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000446057s
	I1009 19:23:08.946377  120005 kubeadm.go:318] 
	I1009 19:23:08.946449  120005 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:23:08.946513  120005 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:23:08.946597  120005 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:23:08.946695  120005 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:23:08.946773  120005 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:23:08.946831  120005 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:23:08.946842  120005 kubeadm.go:318] 
	W1009 19:23:08.946993  120005 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-644179 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-644179 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.030792ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000233414s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000544056s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000446057s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
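	All three control-plane checks fail with connection refused, which points at static pods that never came up (or exited immediately) rather than pods failing health probes; kubeadm's own hint above is the right entry point. A diagnostic pass on the node, mirroring the log's suggestion (profile name assumed from this run; CONTAINERID to be filled in from the ps output):

	    minikube ssh -p first-644179 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	    minikube ssh -p first-644179 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	    minikube ssh -p first-644179 -- sudo journalctl -u kubelet --no-pager -n 50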
	I1009 19:23:08.947094  120005 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 19:23:09.388960  120005 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:23:09.401366  120005 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 19:23:09.401404  120005 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:23:09.409116  120005 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:23:09.409131  120005 kubeadm.go:157] found existing configuration files:
	
	I1009 19:23:09.409188  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:23:09.417224  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:23:09.417274  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:23:09.424960  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:23:09.432937  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:23:09.432984  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:23:09.441006  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:23:09.448942  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:23:09.448990  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:23:09.456659  120005 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:23:09.464467  120005 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:23:09.464509  120005 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:23:09.472001  120005 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 19:23:09.527934  120005 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1009 19:23:09.585036  120005 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 19:27:11.753550  120005 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1009 19:27:11.753694  120005 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1009 19:27:11.756914  120005 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1009 19:27:11.756957  120005 kubeadm.go:318] [preflight] Running pre-flight checks
	I1009 19:27:11.757028  120005 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1009 19:27:11.757080  120005 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1009 19:27:11.757108  120005 kubeadm.go:318] OS: Linux
	I1009 19:27:11.757160  120005 kubeadm.go:318] CGROUPS_CPU: enabled
	I1009 19:27:11.757198  120005 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1009 19:27:11.757237  120005 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1009 19:27:11.757311  120005 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1009 19:27:11.757388  120005 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1009 19:27:11.757460  120005 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1009 19:27:11.757501  120005 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1009 19:27:11.757556  120005 kubeadm.go:318] CGROUPS_IO: enabled
	I1009 19:27:11.757611  120005 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:27:11.757699  120005 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:27:11.757817  120005 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:27:11.757902  120005 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:27:11.760473  120005 out.go:252]   - Generating certificates and keys ...
	I1009 19:27:11.760547  120005 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1009 19:27:11.760601  120005 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1009 19:27:11.760659  120005 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 19:27:11.760704  120005 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1009 19:27:11.760775  120005 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 19:27:11.760830  120005 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1009 19:27:11.760891  120005 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1009 19:27:11.760948  120005 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1009 19:27:11.761038  120005 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 19:27:11.761217  120005 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 19:27:11.761272  120005 kubeadm.go:318] [certs] Using the existing "sa" key
	I1009 19:27:11.761317  120005 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:27:11.761354  120005 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:27:11.761404  120005 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:27:11.761443  120005 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:27:11.761490  120005 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:27:11.761542  120005 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:27:11.761613  120005 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:27:11.761670  120005 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:27:11.763059  120005 out.go:252]   - Booting up control plane ...
	I1009 19:27:11.763133  120005 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:27:11.763229  120005 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:27:11.763296  120005 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:27:11.763414  120005 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:27:11.763541  120005 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1009 19:27:11.763666  120005 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1009 19:27:11.763740  120005 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:27:11.763772  120005 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1009 19:27:11.763877  120005 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:27:11.763956  120005 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:27:11.764003  120005 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.80567ms
	I1009 19:27:11.764080  120005 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1009 19:27:11.764168  120005 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1009 19:27:11.764243  120005 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1009 19:27:11.764307  120005 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1009 19:27:11.764361  120005 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000294866s
	I1009 19:27:11.764422  120005 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000692889s
	I1009 19:27:11.764483  120005 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000623533s
	I1009 19:27:11.764486  120005 kubeadm.go:318] 
	I1009 19:27:11.764557  120005 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1009 19:27:11.764621  120005 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 19:27:11.764686  120005 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1009 19:27:11.764785  120005 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 19:27:11.764865  120005 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1009 19:27:11.764967  120005 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1009 19:27:11.764986  120005 kubeadm.go:318] 
	I1009 19:27:11.765051  120005 kubeadm.go:402] duration metric: took 8m6.772047994s to StartCluster
	I1009 19:27:11.765097  120005 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 19:27:11.765172  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 19:27:11.790717  120005 cri.go:89] found id: ""
	I1009 19:27:11.790738  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.790743  120005 logs.go:284] No container was found matching "kube-apiserver"
	I1009 19:27:11.790748  120005 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 19:27:11.790801  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 19:27:11.816755  120005 cri.go:89] found id: ""
	I1009 19:27:11.816768  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.816774  120005 logs.go:284] No container was found matching "etcd"
	I1009 19:27:11.816778  120005 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 19:27:11.816827  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 19:27:11.842049  120005 cri.go:89] found id: ""
	I1009 19:27:11.842065  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.842072  120005 logs.go:284] No container was found matching "coredns"
	I1009 19:27:11.842078  120005 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 19:27:11.842170  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 19:27:11.868864  120005 cri.go:89] found id: ""
	I1009 19:27:11.868881  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.868890  120005 logs.go:284] No container was found matching "kube-scheduler"
	I1009 19:27:11.868897  120005 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 19:27:11.868948  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 19:27:11.894980  120005 cri.go:89] found id: ""
	I1009 19:27:11.894996  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.895034  120005 logs.go:284] No container was found matching "kube-proxy"
	I1009 19:27:11.895041  120005 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 19:27:11.895097  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 19:27:11.920213  120005 cri.go:89] found id: ""
	I1009 19:27:11.920227  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.920236  120005 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 19:27:11.920242  120005 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 19:27:11.920307  120005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 19:27:11.946602  120005 cri.go:89] found id: ""
	I1009 19:27:11.946618  120005 logs.go:282] 0 containers: []
	W1009 19:27:11.946627  120005 logs.go:284] No container was found matching "kindnet"
	I1009 19:27:11.946636  120005 logs.go:123] Gathering logs for container status ...
	I1009 19:27:11.946646  120005 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 19:27:11.975485  120005 logs.go:123] Gathering logs for kubelet ...
	I1009 19:27:11.975502  120005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 19:27:12.041167  120005 logs.go:123] Gathering logs for dmesg ...
	I1009 19:27:12.041187  120005 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 19:27:12.053215  120005 logs.go:123] Gathering logs for describe nodes ...
	I1009 19:27:12.053231  120005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 19:27:12.110795  120005 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:27:12.103783    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.104304    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.105923    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.106466    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.107741    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1009 19:27:12.103783    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.104304    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.105923    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.106466    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:12.107741    2399 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 19:27:12.110817  120005 logs.go:123] Gathering logs for CRI-O ...
	I1009 19:27:12.110828  120005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
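	The gathering commands minikube runs above are ordinary shell invocations and can be replayed interactively when triaging, for example:

	minikube ssh -p first-644179 "sudo journalctl -u kubelet -n 400"
	minikube ssh -p first-644179 "sudo journalctl -u crio -n 400"
	minikube ssh -p first-644179 "sudo crictl ps -a"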
	W1009 19:27:12.171842  120005 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.80567ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000294866s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000692889s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623533s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1009 19:27:12.171888  120005 out.go:285] * 
	W1009 19:27:12.171977  120005 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.80567ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000294866s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000692889s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623533s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:27:12.171996  120005 out.go:285] * 
	W1009 19:27:12.173647  120005 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
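	When filing that issue, the suggested log bundle can be scoped to the failing profile; assuming the profile name shown in the status output below:

	minikube logs --file=logs.txt -p first-644179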
	I1009 19:27:12.177259  120005 out.go:203] 
	W1009 19:27:12.178449  120005 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.80567ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000294866s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000692889s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000623533s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 19:27:12.178483  120005 out.go:285] * 
	I1009 19:27:12.179799  120005 out.go:203] 
	
	
	==> CRI-O <==
	Oct 09 19:27:05 first-644179 crio[778]: time="2025-10-09T19:27:05.541786306Z" level=info msg="createCtr: removing container 7a1c917cdaba51d11847b44cc82012e08e1b380f8a17ab45d4a66abf6ab6f138" id=6cf219aa-137b-44c4-9775-b13e371ae9b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:05 first-644179 crio[778]: time="2025-10-09T19:27:05.541816916Z" level=info msg="createCtr: deleting container 7a1c917cdaba51d11847b44cc82012e08e1b380f8a17ab45d4a66abf6ab6f138 from storage" id=6cf219aa-137b-44c4-9775-b13e371ae9b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:05 first-644179 crio[778]: time="2025-10-09T19:27:05.543922547Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-644179_kube-system_a4941fdf1e7767fde4873c3f06cbec24_0" id=6cf219aa-137b-44c4-9775-b13e371ae9b0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.518720481Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f03bde4e-c49c-400a-ae8f-375e3cd521c9 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.520757384Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=d0eb86cb-ee96-474b-9aee-2310a6ea1465 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.521673432Z" level=info msg="Creating container: kube-system/kube-apiserver-first-644179/kube-apiserver" id=7df6b2b5-b8f7-49b3-af14-143f2c292a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.521918066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.526481555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.52702295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.546485799Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7df6b2b5-b8f7-49b3-af14-143f2c292a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.5479324Z" level=info msg="createCtr: deleting container ID fe60e4654f259aa453c2e7302da416b990b77987aa10c7b2fed6fad71ee76b9a from idIndex" id=7df6b2b5-b8f7-49b3-af14-143f2c292a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.547971295Z" level=info msg="createCtr: removing container fe60e4654f259aa453c2e7302da416b990b77987aa10c7b2fed6fad71ee76b9a" id=7df6b2b5-b8f7-49b3-af14-143f2c292a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.548002605Z" level=info msg="createCtr: deleting container fe60e4654f259aa453c2e7302da416b990b77987aa10c7b2fed6fad71ee76b9a from storage" id=7df6b2b5-b8f7-49b3-af14-143f2c292a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:07 first-644179 crio[778]: time="2025-10-09T19:27:07.550395665Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-644179_kube-system_15c75024eacc391311f8c3fe4167513b_0" id=7df6b2b5-b8f7-49b3-af14-143f2c292a75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.518640782Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=2a22fa5c-51d5-40a1-ae38-3ed64274e488 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.519468353Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=7bc27b50-76fe-40a9-a082-6a448ee0f679 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.520329475Z" level=info msg="Creating container: kube-system/kube-scheduler-first-644179/kube-scheduler" id=0dbf63f9-c876-47c2-84c4-93eac0339ce3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.520534246Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.523809716Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.524241188Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.536232509Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0dbf63f9-c876-47c2-84c4-93eac0339ce3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.537645215Z" level=info msg="createCtr: deleting container ID cbe2a214c5d2090921caca56b86d404ad23886e03ee4a679aa616a270b62e3c3 from idIndex" id=0dbf63f9-c876-47c2-84c4-93eac0339ce3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.537681654Z" level=info msg="createCtr: removing container cbe2a214c5d2090921caca56b86d404ad23886e03ee4a679aa616a270b62e3c3" id=0dbf63f9-c876-47c2-84c4-93eac0339ce3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.537711346Z" level=info msg="createCtr: deleting container cbe2a214c5d2090921caca56b86d404ad23886e03ee4a679aa616a270b62e3c3 from storage" id=0dbf63f9-c876-47c2-84c4-93eac0339ce3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:27:11 first-644179 crio[778]: time="2025-10-09T19:27:11.539805437Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-first-644179_kube-system_c3c77e89a0d5e956cdb5de17139de995_0" id=0dbf63f9-c876-47c2-84c4-93eac0339ce3 name=/runtime.v1.RuntimeService/CreateContainer
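	Every CreateContainer attempt in this section fails with the same "cannot open sd-bus: No such file or directory", which is the actual failure behind the health-check timeouts above. One plausible reading is that the OCI runtime is being asked to create systemd-managed cgroups but cannot reach a systemd D-Bus socket inside the node. A quick check, assuming CRI-O's default config layout:

	minikube ssh -p first-644179 "grep -rn cgroup_manager /etc/crio/"
	minikube ssh -p first-644179 "ls -l /run/systemd/private /run/dbus/system_bus_socket"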
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1009 19:27:13.286846    2537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:13.287431    2537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:13.289001    2537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:13.289460    2537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1009 19:27:13.291004    2537 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088019] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.405828] i8042: Warning: Keylock active
	[  +0.010689] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003222] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000705] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000716] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000712] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000666] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000718] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.511743] block sda: the capability attribute has been deprecated.
	[  +0.094464] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026060] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.769962] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 19:27:13 up  2:09,  0 user,  load average: 0.08, 0.13, 0.18
	Linux first-644179 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 09 19:27:05 first-644179 kubelet[1771]: E1009 19:27:05.544425    1771 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:27:05 first-644179 kubelet[1771]:         container etcd start failed in pod etcd-first-644179_kube-system(a4941fdf1e7767fde4873c3f06cbec24): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:05 first-644179 kubelet[1771]:  > logger="UnhandledError"
	Oct 09 19:27:05 first-644179 kubelet[1771]: E1009 19:27:05.544456    1771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-644179" podUID="a4941fdf1e7767fde4873c3f06cbec24"
	Oct 09 19:27:07 first-644179 kubelet[1771]: E1009 19:27:07.518302    1771 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-644179\" not found" node="first-644179"
	Oct 09 19:27:07 first-644179 kubelet[1771]: E1009 19:27:07.550703    1771 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:27:07 first-644179 kubelet[1771]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:07 first-644179 kubelet[1771]:  > podSandboxID="a7ba0f206c3b0aa18be685a12e09846e81891e5c2b620bbb9319a2e14d53a10e"
	Oct 09 19:27:07 first-644179 kubelet[1771]: E1009 19:27:07.550797    1771 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:27:07 first-644179 kubelet[1771]:         container kube-apiserver start failed in pod kube-apiserver-first-644179_kube-system(15c75024eacc391311f8c3fe4167513b): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:07 first-644179 kubelet[1771]:  > logger="UnhandledError"
	Oct 09 19:27:07 first-644179 kubelet[1771]: E1009 19:27:07.550825    1771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-first-644179" podUID="15c75024eacc391311f8c3fe4167513b"
	Oct 09 19:27:08 first-644179 kubelet[1771]: E1009 19:27:08.142341    1771 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-644179?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 09 19:27:08 first-644179 kubelet[1771]: I1009 19:27:08.294759    1771 kubelet_node_status.go:75] "Attempting to register node" node="first-644179"
	Oct 09 19:27:08 first-644179 kubelet[1771]: E1009 19:27:08.295166    1771 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-644179"
	Oct 09 19:27:08 first-644179 kubelet[1771]: E1009 19:27:08.406934    1771 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.58.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dfirst-644179&limit=500&resourceVersion=0\": dial tcp 192.168.58.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 09 19:27:11 first-644179 kubelet[1771]: E1009 19:27:11.518227    1771 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-644179\" not found" node="first-644179"
	Oct 09 19:27:11 first-644179 kubelet[1771]: E1009 19:27:11.530289    1771 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-644179\" not found"
	Oct 09 19:27:11 first-644179 kubelet[1771]: E1009 19:27:11.540070    1771 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 09 19:27:11 first-644179 kubelet[1771]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:11 first-644179 kubelet[1771]:  > podSandboxID="f86653bc1300dbb6dd411d74a0b76b98f3dd381062e4f1bba6c71aebed0ab77d"
	Oct 09 19:27:11 first-644179 kubelet[1771]: E1009 19:27:11.540189    1771 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 09 19:27:11 first-644179 kubelet[1771]:         container kube-scheduler start failed in pod kube-scheduler-first-644179_kube-system(c3c77e89a0d5e956cdb5de17139de995): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 09 19:27:11 first-644179 kubelet[1771]:  > logger="UnhandledError"
	Oct 09 19:27:11 first-644179 kubelet[1771]: E1009 19:27:11.540219    1771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-first-644179" podUID="c3c77e89a0d5e956cdb5de17139de995"
	

                                                
                                                
-- /stdout --
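The kubelet excerpt confirms the same sd-bus failure for etcd, kube-apiserver, and kube-scheduler alike, so kubeadm's four-minute waits could never have succeeded. While the node is still up, the retry loop can be watched live, e.g.:

	minikube ssh -p first-644179 "sudo journalctl -u kubelet -f | grep CreateContainerError"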
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-644179 -n first-644179
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-644179 -n first-644179: exit status 6 (293.567341ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:27:13.661052  125403 status.go:458] kubeconfig endpoint: get endpoint: "first-644179" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-644179" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-644179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-644179
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-644179: (1.907908386s)
--- FAIL: TestMinikubeProfile (500.52s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (7200.056s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-352628
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-352628-m01 --driver=docker  --container-runtime=crio
E1009 19:51:57.703345   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1009 19:55:34.617301   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m31s)
		TestMultiNode/serial (28m31s)
		TestMultiNode/serial/ValidateNameConflict (5m22s)
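	The 2h0m0s limit is the -timeout budget handed to the test binary (Go's default is only 10m), and it kills the whole binary regardless of which subtest happens to be running; the 5m22s in ValidateNameConflict is simply where the deadline landed. When reproducing locally, the budget can be raised; a sketch only, since the CI harness wraps go test with additional driver and runtime flags:

	go test ./test/integration -run "TestMultiNode/serial/ValidateNameConflict" -timeout 3h -v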

                                                
                                                
goroutine 2079 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0005828c0, {0x32044fa?, 0xc00072da88?}, 0x3c52ee0)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc0005828c0)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc0005828c0, 0xc00072dbc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc0000101b0, {0x5c636c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc00030c4e0?, 0x5c8bdc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc000711e00)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000711e00)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

                                                
                                                
goroutine 115 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000602380)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestOffline(0xc000602380)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000602380, 0x3c52ef8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 127 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc0006028c0)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0006028c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertExpiration(0xc0006028c0)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc0006028c0, 0x3c52df0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 148 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000603180)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000603180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc000603180)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0xb3
testing.tRunner(0xc000603180, 0x3c52e88)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 146 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000602e00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000602e00)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0xb3
testing.tRunner(0xc000602e00, 0x3c52e38)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 129 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000602c40)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602c40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000602c40)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0xb3
testing.tRunner(0xc000602c40, 0x3c52e40)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 649 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016f6000, 0xc001579650)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 648
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

                                                
                                                
goroutine 126 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000602700)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x59
k8s.io/minikube/test/integration.TestCertOptions(0xc000602700)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0xb3
testing.tRunner(0xc000602700, 0x3c52df8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 1834 [chan receive, 28 minutes]:
testing.(*T).Run(0xc0014a6540, {0x31f4138?, 0x1a3185c5000?}, 0xc00076aa50)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc0014a6540)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x3c5
testing.tRunner(0xc0014a6540, 0x3c52ee0)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

                                                
                                                
goroutine 190 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x7c612f674b98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0001f6600?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001f6600)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0001f6600)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000132bc0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc000132bc0)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0001fe700, {0x3f9d0b0, 0xc000132bc0})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0001fe700)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 187
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129
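
Goroutine 190 is the HTTP proxy that functional_test.go's startHTTPProxy launches for the proxy tests; "IO wait" in net.(*TCPListener).Accept just means the server is idling for connections for the rest of the run. A hedged sketch of the same start-a-listener-in-a-goroutine pattern (the names below are illustrative, not minikube's):

package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

// startProxy binds a random local port and serves HTTP from a goroutine.
// That goroutine then sits in Accept/Serve until the process exits.
func startProxy() (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // random free port
	if err != nil {
		return "", err
	}
	srv := &http.Server{Handler: http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintln(w, "proxied:", r.URL)
		})}
	go srv.Serve(ln) // parked in Accept, the "IO wait" state above
	return ln.Addr().String(), nil
}

func main() {
	addr, err := startProxy()
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Get("http://" + addr + "/v1")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println("proxy answered on", addr)
}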

goroutine 2046 [syscall, 5 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xd, 0xc00072fa08, 0x4, 0xc0000890e0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc00072fa36?, 0xc00072fb60?, 0x5930ab?, 0x7ffc0dee81ad?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc001538018?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000680008?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0001b1980)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc0001b1980)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc00171d340, 0xc0001b1980)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3faf7d0, 0xc00030eee0}, 0xc00171d340, {0xc0006a42b0, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc00171d340?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc00171d340, 0xc000810180)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1842
	/usr/local/go/src/testing/testing.go:1851 +0x413
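
Goroutine 2046 is the TestMultiNode/serial/ValidateNameConflict subtest blocked inside os.(*Process).Wait on a minikube child process, with nothing bounding the wait. For illustration only, a hypothetical helper `runBounded` shows how exec.CommandContext plus context.WithTimeout caps such a wait instead of blocking indefinitely:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runBounded runs a command but gives up after `limit`: once the context
// expires, the child is killed and Wait returns, instead of parking the
// goroutine in Process.Wait the way the trace above shows.
func runBounded(limit time.Duration, name string, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), limit)
	defer cancel()
	cmd := exec.CommandContext(ctx, name, args...)
	out, err := cmd.CombinedOutput() // returns once the process exits or is killed
	fmt.Printf("output: %s\n", out)
	return err
}

func main() {
	// "sleep 60" stands in for a hung child process; the call returns
	// after roughly 2 seconds with a "signal: killed" error.
	fmt.Println(runBounded(2*time.Second, "sleep", "60"))
}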

goroutine 437 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fafb50, 0xc0000844d0}, 0xc0000b5f50, 0xc0000c8f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3fafb50, 0xc0000844d0}, 0xf0?, 0xc0000b5f50, 0xc0000b5f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fafb50?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc00080e480?, 0xc0015783f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 473
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 581 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0002d4a80, 0xc001578620)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 580
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 750 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00080e780, 0xc001578930)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 386
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 436 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000810a50, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc001567ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc5640)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00075df20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x41302c?, 0x2e01140?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3fafb50?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3fafb50, 0xc0000844d0}, 0xc001567f50, {0x3f66b60, 0xc001532930}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008507d0?, {0x3f66b60?, 0xc001532930?}, 0x55?, 0xc0003ba000?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001738e40, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 473
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9
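
Goroutine 436 above, together with 437 earlier and 473 below, is client-go's certificate-rotation machinery: a queue worker, a poll loop, and the run loop that owns them. The sync.Cond.Wait state simply means the work queue is empty and the worker is parked until a producer signals. A stdlib-only sketch of that park-until-signaled queue pattern (not client-go's actual workqueue implementation):

package main

import (
	"fmt"
	"sync"
	"time"
)

// queue is a minimal blocking work queue guarded by a condition variable.
type queue struct {
	mu    sync.Mutex
	cond  *sync.Cond
	items []string
}

func newQueue() *queue {
	q := &queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

func (q *queue) add(s string) {
	q.mu.Lock()
	q.items = append(q.items, s)
	q.mu.Unlock()
	q.cond.Signal() // wake one parked worker
}

func (q *queue) get() string {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.items) == 0 {
		q.cond.Wait() // the "sync.Cond.Wait" state in the dump
	}
	s := q.items[0]
	q.items = q.items[1:]
	return s
}

func main() {
	q := newQueue()
	go func() { time.Sleep(50 * time.Millisecond); q.add("cert rotated") }()
	fmt.Println(q.get())
}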

goroutine 472 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc2240, {{0x3fb7268, 0xc0002483c0?}, 0xc000084e00?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 471
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 438 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 437
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 1842 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000683c00, {0x3219093?, 0x4097be4?}, 0xc000810180)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc000683c00)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc000683c00, 0xc00076aa50)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1834
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 473 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc00075df20, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 471
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 2076 [IO wait, 5 minutes]:
internal/poll.runtime_pollWait(0x7c612f6741c0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00078c7e0?, 0xc001482a8f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00078c7e0, {0xc001482a8f, 0x571, 0x571})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000616018, {0xc001482a8f?, 0x41835f?, 0x2c44020?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001a9e0f0, {0x3f64f60, 0xc000782108})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f650e0, 0xc001a9e0f0}, {0x3f64f60, 0xc000782108}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000616018?, {0x3f650e0, 0xc001a9e0f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000616018, {0x3f650e0, 0xc001a9e0f0})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f650e0, 0xc001a9e0f0}, {0x3f64fe0, 0xc000616018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0x7272452064656c64?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2046
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 2077 [IO wait]:
internal/poll.runtime_pollWait(0x7c612f674620, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc00078c900?, 0xc0004d7769?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00078c900, {0xc0004d7769, 0x897, 0x897})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000616030, {0xc0004d7769?, 0x41835f?, 0x2c44020?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001a9e120, {0x3f64f60, 0xc000822048})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f650e0, 0xc001a9e120}, {0x3f64f60, 0xc000822048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000616030?, {0x3f650e0, 0xc001a9e120})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000616030, {0x3f650e0, 0xc001a9e120})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f650e0, 0xc001a9e120}, {0x3f64fe0, 0xc000616030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000810180?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2046
	/usr/local/go/src/os/exec/exec.go:748 +0x92b
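
Goroutines 2076 and 2077 are the two pipe copiers os/exec starts for the same command as goroutine 2046, one per output stream, each draining the child's file descriptor into a bytes.Buffer until EOF. The pattern, reduced to a stand-alone sketch:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// When Stdout/Stderr are not *os.File values, os/exec creates a pipe
	// per stream and a goroutine that copies pipe -> buffer until EOF,
	// which is exactly what writerDescriptor.func1 is doing above.
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("echo", "hello")
	cmd.Stdout = &stdout // copier goroutine for stdout
	cmd.Stderr = &stderr // copier goroutine for stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	fmt.Printf("captured %q\n", stdout.String())
}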

goroutine 2078 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001b1980, 0xc0007d4620)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2046
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3


Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.96
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.86
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
39 TestErrorSpam/start 0.63
40 TestErrorSpam/status 0.85
41 TestErrorSpam/pause 1.3
42 TestErrorSpam/unpause 1.3
43 TestErrorSpam/stop 1.39
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 2.71
55 TestFunctional/serial/CacheCmd/cache/add_local 0.82
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
60 TestFunctional/serial/CacheCmd/cache/delete 0.1
65 TestFunctional/serial/LogsCmd 0.88
66 TestFunctional/serial/LogsFileCmd 0.92
69 TestFunctional/parallel/ConfigCmd 0.37
71 TestFunctional/parallel/DryRun 0.39
72 TestFunctional/parallel/InternationalLanguage 0.17
78 TestFunctional/parallel/AddonsCmd 0.13
81 TestFunctional/parallel/SSHCmd 0.71
82 TestFunctional/parallel/CpCmd 1.9
84 TestFunctional/parallel/FileSync 0.28
85 TestFunctional/parallel/CertSync 1.72
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
93 TestFunctional/parallel/License 0.35
100 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
107 TestFunctional/parallel/ProfileCmd/profile_list 0.39
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
110 TestFunctional/parallel/Version/short 0.06
111 TestFunctional/parallel/Version/components 0.5
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.79
117 TestFunctional/parallel/ImageCommands/Setup 0.48
118 TestFunctional/parallel/MountCmd/specific-port 2.08
121 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.46
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.43
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.22
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.21
188 TestKicCustomNetwork/create_custom_network 28.87
189 TestKicCustomNetwork/use_default_bridge_network 25.25
190 TestKicExistingNetwork 24.48
191 TestKicCustomSubnet 27.99
192 TestKicStaticIP 26.14
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 5.06
198 TestMountStart/serial/VerifyMountFirst 0.27
199 TestMountStart/serial/StartWithMountSecond 5.15
200 TestMountStart/serial/VerifyMountSecond 0.26
201 TestMountStart/serial/DeleteFirst 1.65
202 TestMountStart/serial/VerifyMountPostDelete 0.27
203 TestMountStart/serial/Stop 1.2
204 TestMountStart/serial/RestartStopped 7.28
205 TestMountStart/serial/VerifyMountPostStop 0.27
TestDownloadOnly/v1.28.0/json-events (5.96s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-837534 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-837534 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.958342968s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.96s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1009 17:56:16.078663   14880 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1009 17:56:16.078760   14880 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-837534
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-837534: exit status 85 (63.90521ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-837534 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-837534 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:56:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:56:10.160408   14892 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:56:10.160515   14892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:10.160525   14892 out.go:374] Setting ErrFile to fd 2...
	I1009 17:56:10.160529   14892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:10.160749   14892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	W1009 17:56:10.160883   14892 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-11374/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-11374/.minikube/config/config.json: no such file or directory
	I1009 17:56:10.161401   14892 out.go:368] Setting JSON to true
	I1009 17:56:10.162383   14892 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2318,"bootTime":1760030252,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:56:10.162470   14892 start.go:141] virtualization: kvm guest
	I1009 17:56:10.164736   14892 out.go:99] [download-only-837534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:56:10.164879   14892 notify.go:220] Checking for updates...
	W1009 17:56:10.164904   14892 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 17:56:10.166191   14892 out.go:171] MINIKUBE_LOCATION=21139
	I1009 17:56:10.167789   14892 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:56:10.169240   14892 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 17:56:10.170574   14892 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 17:56:10.171752   14892 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 17:56:10.173937   14892 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 17:56:10.174131   14892 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:56:10.197872   14892 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 17:56:10.197953   14892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:10.609626   14892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-09 17:56:10.598337176 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:10.609726   14892 docker.go:318] overlay module found
	I1009 17:56:10.611717   14892 out.go:99] Using the docker driver based on user configuration
	I1009 17:56:10.611757   14892 start.go:305] selected driver: docker
	I1009 17:56:10.611763   14892 start.go:925] validating driver "docker" against <nil>
	I1009 17:56:10.611859   14892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:10.668590   14892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-09 17:56:10.659050998 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:10.668768   14892 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:56:10.669326   14892 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 17:56:10.669490   14892 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 17:56:10.671566   14892 out.go:171] Using Docker driver with root privileges
	I1009 17:56:10.673289   14892 cni.go:84] Creating CNI manager for ""
	I1009 17:56:10.673368   14892 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 17:56:10.673386   14892 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 17:56:10.673457   14892 start.go:349] cluster config:
	{Name:download-only-837534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-837534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 17:56:10.675198   14892 out.go:99] Starting "download-only-837534" primary control-plane node in "download-only-837534" cluster
	I1009 17:56:10.675240   14892 cache.go:133] Beginning downloading kic base image for docker with crio
	I1009 17:56:10.676823   14892 out.go:99] Pulling base image v0.0.48-1759745255-21703 ...
	I1009 17:56:10.676869   14892 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 17:56:10.676918   14892 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local docker daemon
	I1009 17:56:10.693234   14892 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 17:56:10.693269   14892 cache.go:64] Caching tarball of preloaded images
	I1009 17:56:10.693438   14892 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 17:56:10.695291   14892 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1009 17:56:10.695320   14892 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1009 17:56:10.695727   14892 cache.go:162] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 17:56:10.695914   14892 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 in local cache directory
	I1009 17:56:10.696101   14892 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 to local cache
	I1009 17:56:10.718166   14892 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1009 17:56:10.718282   14892 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1009 17:56:13.473525   14892 cache.go:67] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1009 17:56:13.473868   14892 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/download-only-837534/config.json ...
	I1009 17:56:13.473897   14892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/download-only-837534/config.json: {Name:mkfa38c3c41dcebe6cb0e1726f5561238ccc93b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 17:56:13.474062   14892 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1009 17:56:13.474261   14892 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21139-11374/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-837534 host does not exist
	  To start a cluster, run: "minikube start -p download-only-837534"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
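
The "Last Start" log above shows the preload tarball being fetched with an md5 checksum obtained from the GCS API and appended to the download URL. As a stand-alone illustration, not minikube's real download helper, and with placeholder URL and paths, streaming a download through a hash and comparing digests looks like this:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchWithMD5 streams the response body to disk while hashing it, then
// compares the digest against a checksum obtained out of band.
func fetchWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder URL/path; the checksum value is the one from the log.
	fmt.Println(fetchWithMD5("https://example.com/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b"))
}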

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-837534
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.86s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-240600 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-240600 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.863658527s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.86s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1009 17:56:20.367306   14880 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1009 17:56:20.367342   14880 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-11374/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-240600
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-240600: exit status 85 (61.559037ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-837534 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-837534 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ delete  │ -p download-only-837534                                                                                                                                                   │ download-only-837534 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │ 09 Oct 25 17:56 UTC │
	│ start   │ -o=json --download-only -p download-only-240600 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-240600 │ jenkins │ v1.37.0 │ 09 Oct 25 17:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/09 17:56:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 17:56:16.543231   15249 out.go:360] Setting OutFile to fd 1 ...
	I1009 17:56:16.543711   15249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:16.543800   15249 out.go:374] Setting ErrFile to fd 2...
	I1009 17:56:16.543810   15249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 17:56:16.544310   15249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 17:56:16.545256   15249 out.go:368] Setting JSON to true
	I1009 17:56:16.546011   15249 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2325,"bootTime":1760030252,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 17:56:16.546099   15249 start.go:141] virtualization: kvm guest
	I1009 17:56:16.547981   15249 out.go:99] [download-only-240600] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 17:56:16.548126   15249 notify.go:220] Checking for updates...
	I1009 17:56:16.549480   15249 out.go:171] MINIKUBE_LOCATION=21139
	I1009 17:56:16.551171   15249 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 17:56:16.552701   15249 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 17:56:16.556769   15249 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 17:56:16.558271   15249 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 17:56:16.560592   15249 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 17:56:16.560785   15249 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 17:56:16.584812   15249 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 17:56:16.584931   15249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:16.645125   15249 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-09 17:56:16.633958132 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:16.645246   15249 docker.go:318] overlay module found
	I1009 17:56:16.647393   15249 out.go:99] Using the docker driver based on user configuration
	I1009 17:56:16.647421   15249 start.go:305] selected driver: docker
	I1009 17:56:16.647426   15249 start.go:925] validating driver "docker" against <nil>
	I1009 17:56:16.647521   15249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 17:56:16.704269   15249 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-09 17:56:16.695473283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 17:56:16.704421   15249 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1009 17:56:16.704850   15249 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1009 17:56:16.704998   15249 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 17:56:16.706859   15249 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-240600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-240600"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-240600
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-360662 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-360662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-360662
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1009 17:56:21.443239   14880 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-609906 --alsologtostderr --binary-mirror http://127.0.0.1:44531 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-609906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-609906
--- PASS: TestBinaryMirror (0.81s)
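
TestBinaryMirror points minikube at a mirror on 127.0.0.1 instead of dl.k8s.io via --binary-mirror. Any static file server exposing the right directory layout satisfies that contract; a minimal hedged sketch, where the directory name and layout are assumptions rather than what the test harness really serves:

package main

import (
	"log"
	"net"
	"net/http"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0") // random free port
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("mirror listening on http://%s", ln.Addr())
	// Assumes ./mirror holds release/v1.34.1/bin/linux/amd64/kubectl etc.
	log.Fatal(http.Serve(ln, http.FileServer(http.Dir("./mirror"))))
}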

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-246638
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-246638: exit status 85 (51.823909ms)

-- stdout --
	* Profile "addons-246638" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246638"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-246638
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-246638: exit status 85 (52.153358ms)

-- stdout --
	* Profile "addons-246638" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-246638"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestErrorSpam/start (0.63s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 start --dry-run
--- PASS: TestErrorSpam/start (0.63s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status: exit status 6 (285.892811ms)

-- stdout --
	nospam-663194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:13:23.494948   26994 status.go:458] kubeconfig endpoint: get endpoint: "nospam-663194" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status: exit status 6 (283.63306ms)

-- stdout --
	nospam-663194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:13:23.778703   27107 status.go:458] kubeconfig endpoint: get endpoint: "nospam-663194" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status: exit status 6 (284.06039ms)

-- stdout --
	nospam-663194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1009 18:13:24.062801   27239 status.go:458] kubeconfig endpoint: get endpoint: "nospam-663194" does not appear in /home/jenkins/minikube-integration/21139-11374/kubeconfig

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.3s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 pause
--- PASS: TestErrorSpam/pause (1.30s)

TestErrorSpam/unpause (1.3s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 unpause
--- PASS: TestErrorSpam/unpause (1.30s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 stop: (1.20529022s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-663194 --log_dir /tmp/nospam-663194 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-11374/.minikube/files/etc/test/nested/copy/14880/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.71s)

TestFunctional/serial/CacheCmd/cache/add_local (0.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-753440 /tmp/TestFunctionalserialCacheCmdcacheadd_local55621729/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cache add minikube-local-cache-test:functional-753440
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cache delete minikube-local-cache-test:functional-753440
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-753440
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.82s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.79535ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/LogsCmd (0.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs
--- PASS: TestFunctional/serial/LogsCmd (0.88s)

TestFunctional/serial/LogsFileCmd (0.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 logs --file /tmp/TestFunctionalserialLogsFileCmd3511182888/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.92s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 config get cpus: exit status 14 (59.271456ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 config get cpus: exit status 14 (58.079019ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753440 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753440 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (156.918057ms)

-- stdout --
	* [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1009 18:40:40.672043   59467 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:40.672355   59467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:40.672366   59467 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:40.672373   59467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:40.672597   59467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:40.673047   59467 out.go:368] Setting JSON to false
	I1009 18:40:40.674009   59467 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4989,"bootTime":1760030252,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:40:40.674107   59467 start.go:141] virtualization: kvm guest
	I1009 18:40:40.676700   59467 out.go:179] * [functional-753440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1009 18:40:40.678343   59467 notify.go:220] Checking for updates...
	I1009 18:40:40.678372   59467 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:40:40.679943   59467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:40:40.681606   59467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:40:40.682998   59467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:40:40.684463   59467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:40:40.687804   59467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:40:40.689796   59467 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:40.690522   59467 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:40:40.713860   59467 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:40:40.714013   59467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:40:40.770222   59467 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:40.758694047 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:40:40.770362   59467 docker.go:318] overlay module found
	I1009 18:40:40.772297   59467 out.go:179] * Using the docker driver based on existing profile
	I1009 18:40:40.773640   59467 start.go:305] selected driver: docker
	I1009 18:40:40.773657   59467 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:40:40.773730   59467 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:40:40.775742   59467 out.go:203] 
	W1009 18:40:40.776979   59467 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 18:40:40.778490   59467 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753440 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-753440 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-753440 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.966607ms)

-- stdout --
	* [functional-753440] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1009 18:40:41.059621   59814 out.go:360] Setting OutFile to fd 1 ...
	I1009 18:40:41.059885   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.059896   59814 out.go:374] Setting ErrFile to fd 2...
	I1009 18:40:41.059899   59814 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1009 18:40:41.060215   59814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
	I1009 18:40:41.060650   59814 out.go:368] Setting JSON to false
	I1009 18:40:41.061515   59814 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4989,"bootTime":1760030252,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:40:41.061609   59814 start.go:141] virtualization: kvm guest
	I1009 18:40:41.063781   59814 out.go:179] * [functional-753440] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1009 18:40:41.065771   59814 out.go:179]   - MINIKUBE_LOCATION=21139
	I1009 18:40:41.065764   59814 notify.go:220] Checking for updates...
	I1009 18:40:41.068913   59814 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:40:41.070481   59814 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig
	I1009 18:40:41.071797   59814 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube
	I1009 18:40:41.073119   59814 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:40:41.074623   59814 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:40:41.076619   59814 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1009 18:40:41.077037   59814 driver.go:421] Setting default libvirt URI to qemu:///system
	I1009 18:40:41.102735   59814 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1009 18:40:41.102838   59814 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:40:41.165489   59814 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-09 18:40:41.154761452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652162560 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.42] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:40:41.165636   59814 docker.go:318] overlay module found
	I1009 18:40:41.167894   59814 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1009 18:40:41.169565   59814 start.go:305] selected driver: docker
	I1009 18:40:41.169585   59814 start.go:925] validating driver "docker" against &{Name:functional-753440 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759745255-21703@sha256:cb5cd2ea26aaf2d64a5ec385670af2f770e759461e4b662fd7a8fae305b74c92 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-753440 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:40:41.169700   59814 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:40:41.172117   59814 out.go:203] 
	W1009 18:40:41.173651   59814 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 18:40:41.175097   59814 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (1.9s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh -n functional-753440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cp functional-753440:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd806855305/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh -n functional-753440 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh -n functional-753440 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.90s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/14880/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /etc/test/nested/copy/14880/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/14880.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /etc/ssl/certs/14880.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/14880.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /usr/share/ca-certificates/14880.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/148802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /etc/ssl/certs/148802.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/148802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /usr/share/ca-certificates/148802.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "sudo systemctl is-active docker": exit status 1 (282.085446ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo systemctl is-active containerd"
I1009 18:40:41.570218   14880 retry.go:31] will retry after 3.485612175s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "sudo systemctl is-active containerd": exit status 1 (288.5536ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "338.469475ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "54.833653ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "354.101062ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "54.557356ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753440 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753440 image ls --format short --alsologtostderr:
I1009 18:40:49.083826   64470 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:49.084088   64470 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:49.084102   64470 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:49.084107   64470 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:49.084364   64470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:49.084963   64470 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:49.085068   64470 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:49.085489   64470 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:49.105409   64470 ssh_runner.go:195] Run: systemctl --version
I1009 18:40:49.105470   64470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:49.123977   64470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
I1009 18:40:49.229541   64470 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753440 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753440 image ls --format table --alsologtostderr:
I1009 18:40:50.398050   65194 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:50.398344   65194 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:50.398354   65194 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:50.398358   65194 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:50.398563   65194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:50.399095   65194 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:50.399214   65194 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:50.399593   65194 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:50.417856   65194 ssh_runner.go:195] Run: systemctl --version
I1009 18:40:50.417929   65194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:50.436077   65194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
I1009 18:40:50.536789   65194 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753440 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753440 image ls --format json --alsologtostderr:
I1009 18:40:50.186291   65140 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:50.186572   65140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:50.186582   65140 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:50.186588   65140 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:50.186800   65140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:50.187406   65140 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:50.187532   65140 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:50.187908   65140 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:50.205839   65140 ssh_runner.go:195] Run: systemctl --version
I1009 18:40:50.205893   65140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:50.223890   65140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
I1009 18:40:50.325853   65140 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
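Note: the ImageListJson check above only asserts that the command runs; the JSON it prints is a plain array of image records. A minimal standalone Go sketch (not part of the test suite; the binary path and profile name are copied from the log, the struct fields mirror the keys visible in the output) that decodes that shape:

// Illustrative only: decode the `image ls --format json` output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, reported as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-753440", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%-60s %s bytes\n", img.RepoTags, img.Size)
	}
}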

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753440 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753440 image ls --format yaml --alsologtostderr:
I1009 18:40:49.312624   64607 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:49.312883   64607 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:49.312892   64607 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:49.312896   64607 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:49.313061   64607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:49.313639   64607 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:49.313737   64607 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:49.315427   64607 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:49.336514   64607 ssh_runner.go:195] Run: systemctl --version
I1009 18:40:49.336560   64607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:49.356948   64607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
I1009 18:40:49.459283   64607 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh pgrep buildkitd: exit status 1 (278.722741ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image build -t localhost/my-image:functional-753440 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-753440 image build -t localhost/my-image:functional-753440 testdata/build --alsologtostderr: (2.28426757s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-753440 image build -t localhost/my-image:functional-753440 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e4f53fd49b7
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-753440
--> 1d24c74efdd
Successfully tagged localhost/my-image:functional-753440
1d24c74efdd8762297be3ed06fc272bbc5b110da11602f469932b948da7b9956
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-753440 image build -t localhost/my-image:functional-753440 testdata/build --alsologtostderr:
I1009 18:40:49.818998   64947 out.go:360] Setting OutFile to fd 1 ...
I1009 18:40:49.819799   64947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:49.819812   64947 out.go:374] Setting ErrFile to fd 2...
I1009 18:40:49.819817   64947 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1009 18:40:49.820025   64947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-11374/.minikube/bin
I1009 18:40:49.820666   64947 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:49.821698   64947 config.go:182] Loaded profile config "functional-753440": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1009 18:40:49.822082   64947 cli_runner.go:164] Run: docker container inspect functional-753440 --format={{.State.Status}}
I1009 18:40:49.841459   64947 ssh_runner.go:195] Run: systemctl --version
I1009 18:40:49.841524   64947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-753440
I1009 18:40:49.860847   64947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-11374/.minikube/machines/functional-753440/id_rsa Username:docker}
I1009 18:40:49.964789   64947 build_images.go:161] Building image from path: /tmp/build.1825600389.tar
I1009 18:40:49.964854   64947 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 18:40:49.973043   64947 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1825600389.tar
I1009 18:40:49.976775   64947 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1825600389.tar: stat -c "%s %y" /var/lib/minikube/build/build.1825600389.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1825600389.tar': No such file or directory
I1009 18:40:49.976803   64947 ssh_runner.go:362] scp /tmp/build.1825600389.tar --> /var/lib/minikube/build/build.1825600389.tar (3072 bytes)
I1009 18:40:49.994524   64947 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1825600389
I1009 18:40:50.002970   64947 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1825600389 -xf /var/lib/minikube/build/build.1825600389.tar
I1009 18:40:50.011594   64947 crio.go:315] Building image: /var/lib/minikube/build/build.1825600389
I1009 18:40:50.011669   64947 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-753440 /var/lib/minikube/build/build.1825600389 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1009 18:40:52.032653   64947 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-753440 /var/lib/minikube/build/build.1825600389 --cgroup-manager=cgroupfs: (2.020957179s)
I1009 18:40:52.032706   64947 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1825600389
I1009 18:40:52.041556   64947 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1825600389.tar
I1009 18:40:52.049812   64947 build_images.go:217] Built localhost/my-image:functional-753440 from /tmp/build.1825600389.tar
I1009 18:40:52.049845   64947 build_images.go:133] succeeded building to: functional-753440
I1009 18:40:52.049850   64947 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls
I1009 18:40:52.349242   14880 retry.go:31] will retry after 20.21472021s: Temporary Error: Get "http:": http: no Host in request URL
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.79s)
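The STEP 1/3..3/3 lines above fully determine the build context the test exercises (FROM busybox, RUN true, ADD content.txt), and the stderr shows that on the crio runtime the build is delegated to `sudo podman build` inside the node. A hedged Go sketch that reproduces an equivalent context and the same `image build` invocation; the Dockerfile content is inferred from the STEP lines, so the real testdata/build directory may differ:

// Sketch: rebuild an equivalent context and run the logged build command.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build-ctx")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Inferred from the STEP lines above; not the verbatim testdata file.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	// The payload of content.txt is arbitrary here; only its presence matters.
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-753440",
		"image", "build", "-t", "localhost/my-image:functional-753440", dir).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}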

TestFunctional/parallel/ImageCommands/Setup (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-753440
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.48s)

TestFunctional/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdspecific-port3245943124/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.290362ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1009 18:40:42.132477   14880 retry.go:31] will retry after 743.91377ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdspecific-port3245943124/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "sudo umount -f /mount-9p": exit status 1 (273.058967ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-753440 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdspecific-port3245943124/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T" /mount1: exit status 1 (347.770489ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1009 18:40:44.270651   14880 retry.go:31] will retry after 641.678071ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-753440 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-753440 /tmp/TestFunctionalparallelMountCmdVerifyCleanup817654199/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)
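The "will retry after 641.678071ms" and "will retry after 743.91377ms" lines in the mount tests above show how the harness tolerates the race between the mount daemon starting and the first findmnt probe: it retries the probe with a jittered, growing delay. A minimal Go sketch of that pattern (this is not minikube's actual retry package; the probed command is copied from the log):

// Minimal retry-with-backoff sketch mirroring the "will retry after" lines.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, doubling
// delay between failures, and returns the last error.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retry(5, 500*time.Millisecond, func() error {
		return exec.Command("out/minikube-linux-amd64", "-p", "functional-753440",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	if err != nil {
		fmt.Println("mount never appeared:", err)
	}
}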

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image rm kicbase/echo-server:functional-753440 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-753440 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-753440 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-753440
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-753440
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-753440
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-073351 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-073351 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-073351 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-073351 --output=json --user=testUser: (1.218826653s)
--- PASS: TestJSONOutput/stop/Command (1.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-732537 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-732537 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (67.513724ms)

-- stdout --
	{"specversion":"1.0","id":"9f60528b-9a92-463d-b186-9728b6de2ec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-732537] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8dbe9245-18df-40b7-9061-b829e3c2ee77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"0067e27a-2d86-4626-b04e-1dfd46992be8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10a5abff-1166-4ce0-a021-1292ab1bdf51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-11374/kubeconfig"}}
	{"specversion":"1.0","id":"eee26a7b-08e8-446d-a8d0-0a5ba6571e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-11374/.minikube"}}
	{"specversion":"1.0","id":"fb54aea2-72ae-45c9-a64e-eedd311f9b5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"eaa0d00a-f905-479e-bd16-bf4e332498e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c5d26f98-cbed-4319-b4a8-0584ffe286a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-732537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-732537
--- PASS: TestErrorJSONOutput (0.21s)
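With `--output=json`, minikube emits one CloudEvents-style JSON object per line, exactly as the stdout block above shows. A hedged Go sketch that parses such a stream and surfaces step and error events (field names are taken straight from the logged events; this is not the helper code in json_output_test.go):

// Sketch: parse the line-delimited CloudEvents-style output shown above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"` // all values are strings in the log
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}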

TestKicCustomNetwork/create_custom_network (28.87s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-027621 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-027621 --network=: (26.744716517s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-027621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-027621
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-027621: (2.106910464s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.87s)

TestKicCustomNetwork/use_default_bridge_network (25.25s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-561631 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-561631 --network=bridge: (23.297553858s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-561631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-561631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-561631: (1.929580072s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.25s)

TestKicExistingNetwork (24.48s)

=== RUN   TestKicExistingNetwork
I1009 19:17:36.408043   14880 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 19:17:36.425369   14880 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 19:17:36.425442   14880 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 19:17:36.425467   14880 cli_runner.go:164] Run: docker network inspect existing-network
W1009 19:17:36.441964   14880 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 19:17:36.442007   14880 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1009 19:17:36.442030   14880 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1009 19:17:36.442220   14880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:17:36.460438   14880 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000376670}
I1009 19:17:36.460491   14880 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1009 19:17:36.460532   14880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 19:17:36.520208   14880 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-230837 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-230837 --network=existing-network: (22.376221139s)
helpers_test.go:175: Cleaning up "existing-network-230837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-230837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-230837: (1.954203004s)
I1009 19:18:00.868773   14880 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.48s)
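The interesting part of TestKicExistingNetwork is the setup: the harness picks a free private subnet, creates a minikube-labelled network up front, and then asks `start --network=existing-network` to adopt it. A standalone Go sketch of that setup step, with every flag copied from the `docker network create` invocation logged above:

// Sketch: pre-create the labelled network the way the log shows, so a
// later `minikube start --network=existing-network` can reuse it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	create := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := create.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}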

TestKicCustomSubnet (27.99s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-323149 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-323149 --subnet=192.168.60.0/24: (25.86108358s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-323149 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-323149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-323149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-323149: (2.108765345s)
--- PASS: TestKicCustomSubnet (27.99s)

TestKicStaticIP (26.14s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-090536 --static-ip=192.168.200.200
E1009 19:18:37.695331   14880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-11374/.minikube/profiles/functional-753440/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-090536 --static-ip=192.168.200.200: (23.883781205s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-090536 ip
helpers_test.go:175: Cleaning up "static-ip-090536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-090536
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-090536: (2.126412721s)
--- PASS: TestKicStaticIP (26.14s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (5.06s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-838952 --memory=3072 --mount-string /tmp/TestMountStartserial99927433/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-838952 --memory=3072 --mount-string /tmp/TestMountStartserial99927433/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.061916555s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.06s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-838952 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.15s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-850859 --memory=3072 --mount-string /tmp/TestMountStartserial99927433/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-850859 --memory=3072 --mount-string /tmp/TestMountStartserial99927433/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.149264776s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.15s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-850859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.65s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-838952 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-838952 --alsologtostderr -v=5: (1.652184376s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-850859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-850859
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-850859: (1.201122318s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.28s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-850859
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-850859: (6.279584902s)
--- PASS: TestMountStart/serial/RestartStopped (7.28s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-850859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)